Emotions like fear are not something inherited from our unconscious animal past. Instead they arise from the higher-order aspects that make human thought conscious. That (if I’ve got it right) is the gist of an interesting paper by LeDoux and Brown.

A mainstream view of fear (the authors discuss fear in particular as a handy example of emotion, on the assumption that similar conclusions apply to other emotions) would make it a matter of the limbic system, notably the amygdala, which is known to be associated with the detection of threats. People whose amygdalas have been destroyed become excessively trusting, for example – although as always things are more complicated than they seem at first and the amygdalas are much more than just the organs of ‘fear and loathing’. LeDoux and Brown would make fear a cortical matter, generated only in the kind of reflective consciousness possessed by human beings.

One immediate objection might be that this seems to confine fear to human beings, whereas it seems pretty obvious that animals experience fear too. It depends, though, what we mean by ‘fear’. LeDoux and Brown would not deny that animals exhibit aversive behaviour, that they run away or emit terrified noises; what they are after is the actual feeling of fear. LeDoux and Brown situate their concept of fear in the context of philosophical discussion about phenomenal experience, which makes sense but threatens to open up a larger can of worms – nothing about phenomenal experience, including its bare existence, is altogether uncontroversial. Luckily I think that for the current purposes the deeper issues can be put to one side; whether or not fear is a matter of ineffable qualia we can probably agree that humanly conscious fear is a distinct thing. At the risk of begging the question a bit we might say that if you don’t know you’re afraid, you’re not feeling the kind of fear LeDoux and Brown want to talk about.

On a traditional view, again, fear might play a direct causal role in behaviour. We detect a threat, that causes the feeling of fear, and the feeling causes us to run away. For LeDoux and Brown, it doesn’t work like that. Instead, while the threat causes the running away, that process does not in itself generate the feeling of fear. Those sub-cortical processes, along with other signals, feed into a separate conscious process, and it’s that that generates the feeling.

Another immediate objection therefore might be that the authors have made fear an epiphenomenon; it doesn’t do anything. Some, of course, might embrace the idea that all conscious experience is epiphenomenal; a by-product whose influence on behaviour is illusory. Most people, though, would find it puzzling that the brain should go to the trouble of generating experiences that never affect behaviour and so contribute nothing to survival.

The answer here, I think, comes from the authors’ view of consciousness. They embrace a higher-order theory (HOT). HOTs (there are a number of variations) say that a mental state is conscious if there is another mental state in the same mind which is about it – a Higher Order Representation (HOR); or to put it another way, being conscious is being aware that you’re aware. If that is correct, then fear is a natural result of the application of conscious processes to certain situations, not a peculiar side-effect.

HOTs have been around for a long time: they would always get a mention in any round-up of the contenders for an explanation of consciousness, but somehow it seems to me they have never generated the little bursts of excitement and interest that other theories have enjoyed. LeDoux and Brown suggest that other theories of emotion and consciousness either are ‘first-order’ theories explicitly, or can be construed as such. They defend the HOT concept against one of the leading objections, which is that it seems to be possible to have HORs of non-existent states of awareness. In Charles Bonnet syndrome, for example, people who are in fact blind have vivid and complex visual hallucinations. To deal with this, the authors propose to climb one order higher; the conscious awareness, they suggest, comes not from the HOR of a visual experience but from the HOR of a HOR: a HOROR, in fact. There clearly is no theoretical limit to the number of orders we can rise to, and there’s some discussion here about when and whether we should call the process introspection.

I’m not convinced by HOTs myself. The authors suggest that a first-order theory implies there can be conscious states of which we are not aware, which seems sort of weird: you can feel fear and not know you’re feeling fear? I think there’s a danger here of equivocating between two senses of ‘aware’. Conscious states are states of awareness, but not necessarily states we are aware of: something is in awareness if we are conscious of it, but that’s not to say the something includes our awareness itself. I would argue, on the contrary, that there must be states of awareness with no HOR; otherwise, what about the HOR itself? If HORs are states of awareness themselves, each must have its own HOR, and so on indefinitely. If they’re not, I don’t see how the existence of an inert representation can endow the first-order state with the magic of consciousness.

My intuitive unease goes a bit wider than that, too. The authors have given a credible account of a likely process, but on this account fear looks very like other conscious states. What makes it different – what makes it actually fearful? It seems possible to imagine that I might perform the animal aversive behaviour, experience a conscious awareness of the threat and enter an appropriate conscious state without actually feeling fear. No doubt more could be said here to make the account more plausible, and in fairness LeDoux and Brown could well reply that nobody has a knock-down account of phenomenal experience, and that their version offers more than some.

Even though I don’t sign up for a HOT, I can muster a pretty good degree of agreement nonetheless. Nobody, after all, believes that higher-order mental states don’t exist (we could hardly be discussing this subject if they didn’t). Although I think consciousness doesn’t require HORs, I think they are characteristic of its normal operation, and that ordinary consciousness is a complex meld of states of awareness at several different levels. If we define fear the way LeDoux and Brown do, I can agree that they have given a highly plausible account of how it works without having to give up my belief that simple first-order consciousness is also a thing.

 

Scott Bakker’s alien consciousnesses are back, and this time it’s peer-reviewed.  We talked about their earlier appearance in the Three Pound Brain a while ago, and now a paper in the JCS sets out a new version.

The new paper foregrounds the idea of using hypothetical aliens as a forensic tool for going after the truth about our own minds; perhaps we might call it xenophenomenology. That opens up a large speculative space, though it’s one which is largely closed down again here by the accompanying assumption that our aliens are humanoid, the product of convergent evolution. In fact, they are now called Convergians, instead of the Thespians of the earlier version.

In a way, this is a shame. On the one hand, one can argue that to do xenophenomenology properly is impractical; it involves consideration of every conceivable form of intelligence, which in turn requires an heroic if not god-like imaginative power which few can aspire to (and which would leave the rest of us struggling to comprehend the titanic ontologies involved anyway). But if we could show that any possible mind would have to be x, we should have a pretty strong case for xism about human beings. In the present case not much is said about the detailed nature of the Convergian convergence, and we’re pretty much left to assume that they are the same as us in every important respect. This means there can be no final reveal in which – aha! – it turns out that all this is true of humans too! Instead it’s pretty clear that we’re effectively talking about humans all along.

Of course, there’s not much doubt about the conclusion we’re heading to here, either: in effect the Blind Brain Theory (BBT). Scott argues that as products of evolution our minds are designed to deliver survival in the most efficient way possible. As a result they make do with a mere trickle of data and apply cunning heuristics that provide a model of the world which is quick and practical but misleading in certain important respects. In particular, our minds are unsuited to metacognition – thinking about thinking – and when we do apply our minds to themselves the darkness of those old heuristics breeds monsters: our sense of our selves as real, conscious agents and the hard problems of consciousness.

This seems to put Scott in a particular bind so far as xenophenomenology is concerned. The xenophenomenological strategy requires us to consider objectively what alien minds might be like; but Scott’s theory tells us we are radically incapable of doing so. If we are presented with any intelligent being, on his view those same old heuristics will kick in and tell us that the aliens are people who think much like us. This means his conclusion that Convergians would surely suffer the same mental limitations as us appears as merely another product of faulty heuristics, and the assumed truth of his conclusion undercuts the value of his evidence.

Are those heuristics really that dominant? It is undoubtedly true that through evolution the brains of mammals and other creatures took some short cuts, and quite a few survive into human cognition, including some we’re not generally aware of. That seems to short-change the human mind a bit though; in a way the whole point of it is that it isn’t the prisoner of instinct and habit. When evolution came up with the human brain, it took a sort of gamble; instead of equipping it with good fixed routines, it set it free to come up with new ones, and even over-ride old instincts. That gamble paid off, of course, and it leaves us uniquely able to identify and overcome our own limitations.

If it were true that our views of human conscious identity were built in by the quirks of our heuristics, surely those views would be universal; but they don’t seem to be. Scott suggests that, for example, the two realms of sky and earth naturally give rise to a sort of dualism, and the lack of visible detail in the distant heavens predisposes Convergians (or us) to see it as pure and spiritual. I don’t know about that as a generalisation across human cultures (didn’t the Greeks, for one thing, have three main realms, with the sea as the third?). More to the point, it’s not clear to me that modern western ways of framing the problems of the human mind are universal. Ancient Egyptians divided personhood into several souls, not just one. I’ve been told that in Hindu thought the question of dualism simply never arises. In Shinto the line between the living and the material is not drawn in quite the Western way. In Buddhism human consciousness and personhood have been taken to be illusions for many centuries. Even in the West, I don’t think the concept of consciousness as we now debate it goes back very far at all – probably no earlier than the nineteenth century, with a real boost in the mid-twentieth (in Italian and French I believe one word has to do duty for both ‘consciousness’ and ‘conscience’, although we mustn’t read too much into that). If our heuristics condemn us to seeing our own conscious existence in a particular way, I wouldn’t have expected that much variation.

Of course there’s a difference between what vividly seems true and what careful science tells us is true; indeed if the latter didn’t reveal the limitations of our original ideas this whole discussion would be impossible. I don’t think Scott would disagree about that; and his claim that our cognitive limitations have influenced the way we understand things is entirely plausible. The question is whether that’s all there is to the problems of consciousness.

As Scott mentions here, we don’t just suffer misleading perceptions when thinking of ourselves; we also have dodgy and approximate impressions of physics. But those misperceptions were not Hard problems; no-one had ever really doubted that heavier things fell faster, for example. Galileo sorted several of these basic misperceptions out simply by being a better observer than anyone previously, and paying more careful attention. We’ve been paying careful attention to consciousness for some time now, and arguably it just gets worse.

In fairness that might rather short-change Scott’s detailed hypothesising about how the appearance of deep mystery might arise for Convergians; those, I think, are the places where xenophenomenology comes close to fulfilling its potential.

 

Or perhaps Chomsky’s endorsement of Isaac Newton’s mysterianism. We tend to think of Newton as bringing physics to a triumphant state of perfection, one that lasted until Einstein and, with qualifications, still stands. Chomsky says that in fact Newton shattered the ambitions of mechanical science, which have never recovered; and in doing so he placed permanent limits on the human mind. He quotes Hume:

While Newton seemed to draw off the veil from some of the mysteries of nature, he shewed at the same time the imperfections of the mechanical philosophy; and thereby restored her ultimate secrets to that obscurity, in which they ever did and ever will remain.

What are they talking about? Above all, the theory of gravity, which relies on the unexplained notion of action at a distance. Contemporary thinkers regarded this as nonsensical, almost logically absurd: how could object A affect object B without contacting it and without an intermediating substance? Newton, according to Chomsky, agreed in essence; but defended himself by saying that there was nothing occult in his own work, which stopped short where the funny stuff began. Newton, you might say, described gravity precisely and provided solid evidence to back up his description; what he didn’t do at all was explain it.

The acceptance of gravity, according to Chomsky, involved a permanent drop in the standard of intelligibility that scientific theories required. This has large implications for the mind: it suggests there might be matters beyond our understanding, and provides a particular example. But it may well be that the mind itself is, or involves, similar intractable difficulties.

Chomsky reckons that Darwin reinforced this idea. We are not angels, after all, only apes; all other creatures suffer cognitive limitations; why should we be able to understand everything? In fact our limitations are as important as our abilities in making us what we are; if we were bound by no physical limitations we should become shapeless globs of protoplasm instead of human beings, and the same goes for our minds. Chomsky distinguishes between problems and mysteries. What is forever a mystery to a dog or rat may be a solvable problem for us, but we are bound to have mysteries of our own.

I think some care is needed over the idea of permanent mysteries. We should recognise that in principle there are several kinds of question that might look mysterious, notably the following.

  1. Questions that are, as it were, out of scope: not correctly definable as questions at all, and so unanswerable even by God.
  2. Mysterian mysteries; questions that are not in themselves unanswerable, but which are permanently beyond the human mind.
  3. Questions that are answerable by human beings, but very difficult indeed.
  4. Questions that would be answerable by human beings if we had further information which (a) we just don’t happen to have, or (b) we could never have in principle.

I think it’s just an assumption that the problem of mind, and indeed, the problem of gravity, fall into category 2. There has been a bit of movement in recent decades, I think, and the possibility of 3 or 4(a) remains open.

I don’t think the evolutionary argument is decisive either. Implicitly Chomsky assumes an indefinite scale of cognitive abilities matched by an indefinite scale of problems. Creatures that are higher up the first get higher up the second, but there’s always a higher problem.  Maybe, though, there’s a top to the scale of problems? Maybe we are already clever enough in principle to tackle them all.

If this seems optimistic, think of Chomsky the Lizard, millions of years ago. Some organisms, he opines, can stick their noses out of the water. Some can leap out, briefly. Some crawl out on the beach for a while. Amphibians have to go back to reproduce. But all creatures have a limit to how far they can go from the sea. We lizards, we’ve got legs, lungs, and the right kind of eggs; we can go further than any other. That does not mean we can go all over the island. Evolution guarantees that there will always be parts of the island we can’t reach.

Well, depending on the island, there may be inaccessible parts, but that doesn’t mean legs and lungs have inbuilt limits. So just because we are products of evolution, it doesn’t mean there are necessarily questions of type 2 for us.

Chomsky mocks those who claim that the idea of reducing the mind to activity of the brain is new and revolutionary; it has been widely espoused for centuries, he says. He mentions remarks of Locke which I don’t know, but which resemble the famous analogy of Leibniz’s mill.

If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters a mill. Assuming that, when inspecting its interior, we will find only parts that push one another, and we will never find anything to explain a perception.

The thing about that is, we’ll never find anything to explain a mill, either. Honestly, Gottfried, all I see is pieces of wood and metal moving around; none of them have any milliness? How on earth could a collection of pieces of wood – just by virtue of being arranged in some functional way, you say – acquire completely new, distinctively molational qualities?

Is consciousness a matter of entropy in the brain? An intriguing paper by R. Guevara Erra, D. M. Mateos, R. Wennberg, and J.L. Perez Velazquez says

normal wakeful states are characterised by the greatest number of possible configurations of interactions between brain networks, representing highest entropy values.

What the researchers did, broadly, is identify networks in the brain that were operative at a given time, and then work out the number of possible configurations these networks were capable of. In general, conscious states were associated with states with high numbers of possible configurations – high levels of entropy.

That makes me wrinkle my forehead a bit because it doesn’t fit well with my layman’s grasp of the concept of entropy. In my mind entropy is associated with low levels of available energy and an absence of large complex structure. Entropy always increases, but can decrease locally, as in the case of the complex structures of life, by paying for the decrease with a bigger increase elsewhere; typically by using up available energy. On this view, conscious states – and high levels of possible configurations – look like they ought to be low entropy; but evidently the reverse is actually the case. The researchers also used the Lempel-Ziv measure of complexity, one with strong links to information content, which is clearly an interesting angle in itself.
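
For the curious, here is a very rough sketch of the kind of complexity measure involved. This is my own illustration, not the authors’ actual pipeline: the median-threshold binarisation, the dictionary-style parsing and the normalisation are all simplifying assumptions, but they capture the basic idea that a signal scores higher the less compressible it is.

```python
import numpy as np

def binarise(signal):
    """Threshold a 1-D signal at its median, producing a string of 0s and 1s."""
    median = np.median(signal)
    return "".join("1" if x > median else "0" for x in signal)

def lz_phrase_count(binary_string):
    """Dictionary-style Lempel-Ziv parsing: scan left to right and start a
    new phrase whenever the current substring has not been seen before.
    Less predictable sequences produce more phrases."""
    phrases = set()
    phrase = ""
    for ch in binary_string:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

def normalised_lz(signal):
    """Normalise the phrase count by length so that signals of different
    durations are roughly comparable (random noise tends towards 1)."""
    s = binarise(signal)
    n = len(s)
    return lz_phrase_count(s) * np.log2(n) / n

# Toy comparison: a regular oscillation versus white noise.
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 2000))
noisy = rng.standard_normal(2000)
print(normalised_lz(regular), normalised_lz(noisy))  # the noise scores much higher
```

On this toy measure a regular oscillation scores low and white noise scores close to the maximum; the point is only to show what kind of quantity is being computed, not where real brain recordings fall on it.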

Of the nine subjects, three were epileptic, which allowed comparisons to be made with seizure states as well as with waking and sleeping states. Interestingly, REM sleep showed relatively high entropy levels, which intuitively squares with the idea that dreaming resembles waking a little more than fully unconscious states do – though I think the equation of REM sleep with dreaming is not now thought to be as perfect as it once seemed.

One acknowledged weakness in the research is that it was not possible to establish actual physical connectivity between the regions involved; the assumed networks were therefore based on synchronisation instead. However, synchronisation can arise without direct connection, and absence of synchronisation is not necessarily proof of the absence of connection.

Still, overall the results look good and the picture painted is intuitively plausible. Putting all talk of entropy and Lempel-Ziv aside, what we’re really saying is that conscious states fall in the middle of a notional spectrum: at one end of this spectrum is chaos, with neurons firing randomly; at the other we have them all firing simultaneously in indissoluble lockstep.
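
To see why the middle of that spectrum should carry the most entropy, here is a toy calculation. The assumption is mine, for illustration, and not necessarily the authors’ exact formula: treat the relevant quantity as simply the number of ways of choosing which pairwise links between a set of recording channels are active. That count is a binomial coefficient, which vanishes at both extremes and peaks when about half the links are on.

```python
from math import comb, log

def configuration_entropy(total_pairs, active_pairs):
    """Log of the number of distinct ways of choosing which 'active_pairs'
    out of 'total_pairs' possible pairwise links are switched on."""
    return log(comb(total_pairs, active_pairs))

n_channels = 20
total_pairs = n_channels * (n_channels - 1) // 2   # 190 possible links

for k in (0, 10, 48, 95, 142, 180, 190):
    print(f"{k:3d} active links -> entropy {configuration_entropy(total_pairs, k):6.1f}")
# Entropy is zero at both extremes (no links active, or all of them) and
# peaks when roughly half the possible links are active -- the 'middle of
# the spectrum' between total independence and total lockstep.
```

The output rises from zero, peaks at 95 active links out of 190, and falls back to zero: neither random independence nor total lockstep offers many distinguishable configurations.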

There is an obvious resemblance here to Integrated Information Theory (IIT), which holds that consciousness arises when the quantity of integrated information, measured by a value known as Phi, is sufficiently high. In fact, the authors of the current paper situate it explicitly within the context of earlier work which suggests that the general principle of natural phenomena is the maximisation of information transfer. The read-across from the new results into terms of information processing is quite clear. The authors do acknowledge IIT, but just barely; they may be understandably worried that their new work could end up interpreted as mere corroboration for IIT.

My main worry about both is that they are very likely true, but may not be particularly enlightening. As a rough analogy, we might discover that the running of an internal combustion engine correlates strongly with raised internal temperature states. The presence or absence of raised temperature proves to be a pretty good practical guide to whether the engine is running, and we’re tempted to conclude that raised temperature is the same as running. Actually, though, raising the temperature artificially does not make the engine run, and there is in fact a complex story about running in which raised temperatures are not really central. So it might be that high entropy is characteristic of conscious states without that telling us anything useful about how those states really work.

But I evidently don’t really get entropy, so I might easily be missing the true significance of all this.

Anne Phillips says we have lots of rights over our bodies but no absolute authority to do what we like. John Harris doesn’t believe bodies are the sort of thing that can really be owned. Brooke Magnanti says that criticism of people’s choices about their bodies is framed in moral terms but often stems from disgust rooted in religious or other prejudices rather than in science.

Why should our rights to our bodies be any different to those we have over land, goods or money? I can think of seven arguments one could make.

The first is connection.  We cannot be separated from certain parts of our body (at least not while remaining alive), so our rights are inalienable in a way that our rights to other property are not. If we were fixed to a particular tract of land, then perhaps we should naturally have similarly special rights to that land?  What of the bits of us we can dispose of? Perhaps those are like crops grown on our special land, which we may sell or give away. There is, nevertheless, a presumption that these things are ours to begin with, which is perhaps why we feel Henrietta Lacks, whose cell culture was taken from her without permission or payment, and still thrives in labs around the world, was badly treated.

The second reason our rights to our bodies are treated as special might be a combination of precautionary hesitation and respect for humanity at large and human dignity. Generally, law and morality are concerned to give human beings special protection, and if our bodies are seen to be treated with disregard as if they were no more important than old clothes, that might encourage bad consequences for the treatment of human beings generally. But some people might abuse their special rights and be reckless and disrespectful to their own bodies, and so it seems to me that the respect argument tends more to support the conclusion that bodies should not be property at all – not even our own.

Third is a sense that our body is a protected sphere – no-one else’s business. I think it’s impossible to deny the powerful intuitive appeal of this. It seems related to the moral view that you can do what you like so long as you don’t hurt anyone else. However, I can’t see any strong rational case for it apart from the other arguments considered here.

Fourth is a pragmatic argument similar to the one often advanced for private property in general. While there are no absolute rights in play, it is in practice likely that most of the time we will manage our own property best.  Society should therefore strive to limit its interventions and give us as much control as it can, consistent with other moral requirements.

The fifth argument is that rights over our own body are absolute and special, perhaps because they were given by God or because they are just a prima facie desirable good in themselves like (according to some) freedom.

Sixth, we might offer a more technical argument. Rights, we may notice, all appertain to persons. There cannot, then, be any ownership of persons, because then the regress of rights would have no final root and might be circular or indefinite. If we cannot have ownership rights over persons, then it is a natural extension to limit the ownership of rights over bodies.

The seventh reason suggests that a special ownership of our own bodies is invariably part of the implied social contract in place in any organised society. The deeper reasons may be matters of psychology or anthropology, but ultimately this is simply a human universal.

I think there’s at least something in all seven, but I still don’t think they completely over-ride society’s right to regulate the matter given sufficient justification.

Classic ‘Split Brain’ experiments by Sperry and Gazzaniga could not be reproduced, according to Yair Pinto of the University of Amsterdam. Has the crisis of reproducibility claimed an extraordinary new victim?

The original experiments studied patients who had undergone surgery to disconnect the two halves of their cerebrum by cutting the corpus callosum and associated commissures. This was a last-ditch method of controlling epilepsy, and it worked. The patients’ symptoms were improved and they did not generally suffer any cognitive deficits. They felt normal and were able to continue their ordinary lives.

However, under conditions that fed different information to the two halves of the brain, something strange showed up. Each half reported only its own data, unaware of what the other half was reporting. The effect is clearly demonstrated in this video with Gazzaniga himself…

On the basis of these remarkable results, it has been widely assumed that these patients have two consciousnesses, or two streams of consciousness, operating separately in each hemisphere but generally in such harmony they don’t notice each other. Now, though, performing similar experiments, Pinto and colleagues unexpectedly failed to get the same results. Is it conceivable that Sperry and Gazzaniga’s patients were to some degree playing along with the Doc, giving him the results he evidently wanted? It seems more likely that the effects are just different in different subjects for some reason. In fact, I believe the details of the surgery, as well as the pre-existing condition of the patients, vary. Recently the tendency has been towards less radical surgery and new drugs; in fact there may not be any more split-brain patients in future.

I don’t think that accounts for Pinto’s results, though, and they certainly raise interesting problems. In fact the interpretation of split-brain experiments has always been disputed. Although in experimental conditions the subjects seem divided, they don’t feel like two people and generally – though not invariably – behave normally outside the lab. We do not have cases where the left hand and right hand write separate personal accounts, nor do these patients

Various people have offered interpretations which preserve the essential unity of consciousness, including Michael Tye and Charles E. Marks. One way of looking at it is to point out how unusual the experimental conditions are. Perhaps these conditions (and occasionally others that arise by chance) temporarily induce a bifurcation in an essentially united consciousness. Incidentally, so far as I know no-one has ever tried to repeat the experiments and get the same effects in people with an intact corpus callosum; maybe now and then it would work?

More recently Tim Bayne has proposed a switching model, in which a single consciousness is supported by different physical bits of brain, moving from one physical basis to another without sacrificing its essential unity.

There is, I think, a degree of neurological snobbery about the whole issue, inasmuch as it takes for granted that consciousness is situated in the cerebrum and that therefore dividing the cerebrum can be expected to divide consciousness. Descartes thought the essential unity of consciousness meant it could not reside in parts of the brain which even in normal people are extensively bifurcated into two hemispheres (actually into a number of quite distinct lobes). We need not follow him in plumping for the pineal gland as the seat of the soul (and we know that the cerebellum, for example, can be removed without destroying consciousness). But what about the lowly brain stem? Lizards make do with a brain that in evolutionary terms is little more than our brain stem, yet I’m inclined to believe they have some sense of self, and though they don’t have human cogitation, they certainly have consciousness in at least the basic sense. Perhaps we should see consciousness as situated as much down there as up in the cortex. Then we might interpret split-brain patients as having a single consciousness which has some particular lateralised difficulties in pulling together the contributions of its fancy cortical operations.

It might be that different people have different wiring patterns between lower and higher brain that either facilitate or suppress the split-brain effects; but that wouldn’t altogether explain why Pinto’s results are so different.

Secrets of the Mind is a lively IAI discussion of consciousness. Iain McGilchrist speaks intriguingly of matter being a phase of consciousness, and suggests the brain might simply be an expert transducer. Nicholas Humphrey, who seems to irritate McGilchrist quite a lot, thinks it’s all an illusion. As McGilchrist says, illusions seem to require consciousness, so there’s a problem there. I also wonder what it means to say that pain is an illusion. If I say a fairy palace is an illusion, I mean there is no real palace out there, just something in my mind, but if I say a pain is illusory, what thing out there am I denying the reality of? Roger Penrose unfortunately talks mainly about his (in my view) unappealing theory of microtubules rather than his more interesting argument against computationalism. His own description here suggests he settled on quantum mechanics just because it seemed the most likely place to find non-computational processes, which doesn’t seem a promising strategy to me.

Ultimately those secrets remain unrevealed;  McGilchrist (and Penrose in respect of the Hard Problem) seem to think they probably always will.

The EU is not really giving robots human rights; but some of its proposals are cause for concern. James Vincent on the Verge provides a sensible commentary which corrects the rather alarming headlines generated by the European Parliament draft report issued recently. Actually there are several reasons to keep calm. It’s only a draft, it’s only a report, it’s only the Parliament. It is said that in the early days the Eurocrats had to quickly suppress an explanatory leaflet which described the European Economic Community as a bus. The Commission was the engine, the Council was the driver; and the Parliament… was a passenger. Things have moved on since those days, but there’s still some truth in that metaphor.

A second major reason to stay calm, according to Vincent, is that the report (quite short and readable, by the way, though rather bitty) doesn’t really propose to treat robots as human beings; it mainly addresses the question of liability for the acts of autonomous robots. The sort of personhood it considers is more like the legal personhood of corporations. That’s true, although the report does invite trouble by cursorily raising the question of whether robots can be natural persons. Some other parts read strangely to me, perhaps because the report is trying to cover a lot of ground very quickly. At one point it seems to say that Asimov’s laws of robotics should currently apply to the designers and creators of robots; I suspect the thinking behind that somewhat opaque idea (A designer must obey orders given it by human beings except where such orders would conflict with the First Law?) has not been fully fleshed out in the text.

What about liability? The problem here is that if a robot does damage to property, turns on its fellow robots (pictured) or harms a human being, the manufacturer and the operator might disclaim responsibility because it was the robot that made the decision, or at least, because the robot’s behaviour was not reasonably predictable (I think predictability is the key point: the report has a rather unsatisfactory stab at defining smart autonomous robots, but for present purposes we don’t need anything too philosophical – it’s enough if we can’t tell in advance exactly what the machine will do).

I don’t see a very strong analogy with corporate personhood. In that case the problem is the plethora of agents; it simply isn’t practical to sue everyone involved in a large corporate enterprise, or even assign responsibility. It’s far simpler if you have a single corporate entity that can be held liable (and can also hold the assets of the enterprise, which it may need to pay compensation, etc). In that context the new corporate legal person simplifies the position, whereas with robots, adding a machine person complicates matters. Moreover, in order for the robot’s liability to be useful you would have to allow it to hold property which it could use to fund any liabilities. I don’t think anyone is currently thinking that roombas should come with some kind of dowry.

Note, however, that corporate personhood has another aspect; besides providing an entity to hold assets and sue or be sued, it typically limits the liability of the parties. I am not a lawyer, but as I understand it, if several people launch a joint enterprise they are all liable for its obligations; if they create a corporation to run the enterprise, then liability is essentially limited to the assets held by the corporation. This might seem like a sneaky way of avoiding responsibility; would we want there to be a similar get-out for robots? Let’s come back to that.

It seems to me that the basic solution to the robot liability problem is not to introduce another person, but to apply strict liability, an existing legal concept which makes you responsible for your product even if you could not have foreseen the consequences of using it in a particular case. The report does acknowledge this principle. In practice it seems to me that liability would partly be governed by the contractual relationship between robot supplier and user: the supplier would specify what could be expected given correct use and reasonable parameters – if you used your robot in ways that were explicitly forbidden in that contract, then liability might pass to you.

Basically, though, that approach leaves responsibility with the robot’s builder or supplier, which seems to be in line with what the report mainly advocates. In fact (and this is where things begin to get  a bit questionable) the report advocates a scheme whereby all robots would be registered and the supplier would be obliged to take out insurance to cover potential liability. An analogy with car insurance is suggested.

I don’t think that’s right. Car insurance is a requirement mainly because in the absence of insurance, car owners might not be able to pay for the damage they do. Making third-party insurance obligatory means that the money will always be there. In general I think we can assume by contrast that robot corporations, one way or another, will usually have the means to pay for individual incidents, so an insurance scheme is redundant. It might only be relevant where the potential liability was outstandingly large.

The thing is, there’s another issue here. If we need to register our robots and pay large insurance premiums, that imposes a real burden and a significant number of robot projects will not go ahead. Suppose, hypothetically, we have robots that perform crucial work in nuclear reactors. The robots are not that costly, but the potential liabilities if anything goes wrong are huge. The net result might be that nobody can finance the construction of these robots even though their existence would be hugely beneficial; in principle, the lack of robots might even stop certain kinds of plant from ever being built.

So the insurance scheme looks like a worrying potential block on European robotics; but remember we also said that corporate personhood allows some limitation of liability. That might seem like a cheat, but another way of looking at it is to see it as a solution to the same kind of problem; if investors had to accept unlimited personal liability, there are some kinds of valuable enterprise that would just never be viable. Limiting liability allows ventures that otherwise would have potential downsides too punishing for the individuals involved. Perhaps, then, there actually is an analogy here and we ought to think about allowing some limitation of liability in the case of autonomous robots? Otherwise some useful machines may never be economically viable.

Anyway, I’m not a lawyer, and I’m not an economist, but I see some danger that an EU regime based on this report with registration, possibly licensing, and mandatory insurance, could significantly inhibit European robotics.

Further to the question of conscious vs non-conscious action, here’s a recent RSA video presenting some evidence.

Nicholas Shea presents with Barry Smith riding shotgun. There’s a mention for one piece of research also mentioned by Di Nucci; preventing expert golfers from concentrating consciously on their shot actually improves their performance (it does the opposite for non-experts). There are two pieces of audience participation; one shows that subliminal prompts can (slightly) affect behaviour; the other shows that time to think and discuss can help where explicit reasoning is involved (though it doesn’t seem to help the RSA audience much).

Perhaps in the end consciousness is not essentially private after all, but social and co-operative?

Sign of the times to see two philosophers unashamedly dabbling in experiments. I think the RSA also has to win some kind of prize in the hotly-contested ‘unconvincing brain picture’ category for using purple and yellow cauliflower.

Did my aliefs make me do that? Ezio Di Nucci thinks we need a different way of explaining automatic behaviour. By automatic, he means such things as habits and ‘primed’ or nudged behaviour; quite a lot of what we do is not the consequence of a consciously pondered decision; yet it manages to be complex and usually sort of appropriate.

Di Nucci quotes a number of examples, but the two he uses most are popcorn and rudeness. Popcorn is an example of a habit. People who were in the habit of eating a lot of popcorn at the cinema still ate the same amount even when they were given stale popcorn. People who did not have the habit ate significantly less if it was stale; as did people who had the habit if they were given it outside the context of a cinema visit (showing that they did notice the difference; it wasn’t just that they didn’t care about the quality of the popcorn).

Rudeness is an example of ‘primed’ behaviour. Subjects were exposed to lists of words which either exemplified rudeness or courtesy; those who had been “primed” with rude words went on to interrupt an interviewer more often. There are many examples of this kind of priming effect, but there is now a problem, as Di Nucci notes; many of these experiments have been badly hit by the recent wave of problems with reproducibility in psychological research. It is so bad that some might advocate putting the whole topic of priming aside until the underlying research has been put on a firmer footing. Di Nucci proceeds anyway; even if particular studies are invalidated, the principle probably captures something true about human psychology.

So how do we explain this kind of mindless behaviour? Di Nucci begins by examining what he characterises as a ‘traditional’ account given by action theory, particularly citing Donald Davidson. On this view an action is intentional if, described in a particular way, it corresponds with a ‘pro’ attitude in the agent towards actions with a particular property and the belief that the action has that property. For Di Nucci this comes down to the simple view that an action is intentional if it corresponds with a pre-existing intention.

On that basis, it doesn’t seem we can say that the primed subject interrupted unintentionally. It doesn’t really seem that we can say that the subject ate stale popcorn unintentionally either. We might try to say that they intended to eat popcorn and did not intend to eat stale popcorn; but since they clearly knew after the first mouthful that it was stale, this doesn’t really work. We’re left with no distinct account of automatic behaviour.

Di Nucci cites two other arguments. In another experiment, people primed to be helpful are more likely to pick up another person’s dropped pen; but if the pen is visibly leaking the priming effect is eliminated. The priming cannot be operating at the same level as the conscious reluctance to get ink on one’s fingers or the latter would not erase the former.

Another way to explain the automatic behaviour is to attribute it to false beliefs; but that would imply that the behaviour was to some degree irrational, and neither interrupting nor eating popcorn is actually irrational.

What, then, about aliefs? This is the very handy concept introduced by Tamar Szabo Gendler back in 2008; in essence aliefs are non-conscious beliefs. They explain why we feel nervous walking on a glass floor; we believe we’re safe, but alieve we’re about to fall. They may also explain unconscious bias and many other normal but irrational kinds of behaviour. I rather like this idea; since beliefs and desires are often treated as a pair in studies of intentionality, maybe we could have the additional idea of unconscious desires, or cesires; then we have the nicely alphabetic set of aliefs, beliefs, cesires and desires.

Aliefs look just right to deal with our problems. We might believe the popcorn is stale, but alieve that cinema popcorn is good. We might believe ourselves polite but alieve that interrupting is fine.

Di Nucci suggests three problems. First, he doesn’t think aliefs deal with the leaky pen, where the inclination to help disappears altogether, not partially or subject to some conflict. Second, he thinks aliefs end up looking like beliefs. He cites the case of George Clooney fans who are less willing to pay for George’s bandanna if it has been washed. Allegedly this is because of an irrational alief that a bandanna contains George’s essence; but the conscious belief that it has been washed interferes with this. If aliefs can interact with beliefs like this they must be similarly explicit and propositional, and so not really different from beliefs. To me this argument doesn’t carry much force because there seem to be lots of better ways we could account for the Clooney fan behaviour.

Third, he thinks again that irrationality is a problem; aliefs are supposed by Gendler to be arational, but the two regular examples of automatic behaviour seem rational enough.

I think aliefs work rather well; if anything they work too well. We can ask about beliefs and challenge them; aliefs are not accessible, and we are free to attribute any mad aliefs we like to anyone if they get our explanatory dirty work done. There’s perhaps too much of a “get out of jail free” about that.

Anyway, if all that is to be rejected, what is the explanation? Di Nucci suggests that automatic behaviour simply fills in when we don’t want to waste attention on the detail. Specifically, he suggests they come into play in “Buridan’s Ass” cases; where we are faced with choices between alternatives that are equally good or neutral, as often happens. It’s pointless to direct our attention to these meaningless choices, so they are left to whatever habits or priming we may be subject to.

That’s fine as far as it goes, but I wonder whether it doesn’t just retreat a bit; the question was really how well-formed actions come from non-conscious thought. Di Nucci seems in danger of telling us that automatic action is accounted for by the fact that it is automatic.