Is there a retribution gap? In an interesting and carefully argued paper John Danaher argues that in respect of robots, there is.

For human beings in normal life he argues that a fairly broad conception of responsibility works OK. Often enough we don’t even need to distinguish between causal and moral responsibility, let alone worry about the six or more different types identified by hair-splitting philosophers.

However, in the case of autonomous robots the sharing out of responsibility gets more difficult. Is the manufacturer, the programmer, or the user of the bot responsible for everything it does, or does the bot properly shoulder the blame for its own decisions? Danaher thinks that gaps may arise, cases in which we can blame neither the humans involved nor the bot. In these instances we need to draw some finer distinctions than usual, and in particular we need to separate the idea of liability into compensation liability on one hand and retributive liability on the other. The distinction is essentially that between who pays for the damage and who goes to jail; typically the difference between matters dealt with in civil and criminal courts. The gap arises because for liability we normally require that the harm must have been reasonably foreseeable. However, the behaviour of autonomous robots may not be predictable either by their designers or users on the one hand, or by the bots themselves on the other.

In the case of compensation liability Danaher thinks things can be patched up fairly readily through the use of strict and vicarious liability. These forms of liability, already well established in legal practice, give up some of the usual requirements and make people responsible for things they could not have been expected to foresee or guard against. I don’t think the principles of strict liability are philosophically uncontroversial, but they are legally established and it is at least clear that applying them to robot cases does not introduce any new issues. Danaher sees a worse problem in the case of retribution, where there is no corresponding looser concept of responsibility, and hence, no-one who can be punished.

Do we, in fact, need to punish anyone? Danaher rightly says that retribution is one of the fundamental principles behind punishment in most if not all human societies, and is upheld by many philosophers. Many, perhaps, but my impression is that the majority of moral philosophers and lay opinion actually see some difficulty in justifying retribution. Its psychological and sociological roots are strong, but the philosophical case is much more debatable. For myself I think a principle of retribution can be upheld, but it is by no means as clear or as well supported as the principle of deterrence, for example. So, many people might be perfectly comfortable with a retributive gap in this area.

What about scapegoating – punishing someone who wasn’t really responsible for the crime? Couldn’t we use that to patch up the gap?  Danaher mentions it in passing, but treats it as something whose unacceptability is too obvious to need examination. I think, though, that in many ways it is the natural counterpart to the strict and vicarious liability he endorses for the purposes of compensation. Why don’t we just blame the manufacturer anyway – or the bot (Danaher describes Basil Fawlty’s memorable thrashing of his unco-operative car)?

How can you punish a bot though? It probably feels no pain or disappointment, it doesn’t mind being locked up or even switched off and destroyed. There does seem to be a strange gap if we have an entity which is capable of making complex autonomous decisions, but doesn’t really care about anything. Some might argue that in order to make truly autonomous decisions the bot must be engaged to a degree that makes the crushing of its hopes and projects a genuine punishment, but I doubt it. Even as a caring human being it seems quite easy to imagine working for an organisation on whose behalf you make complex decisions, but without ultimately caring whether things go well or not (perhaps even enjoying a certain schadenfreude in the event of disaster). How much less is a bot going to be bothered?

In that respect I think there might really be a punitive gap that we ought to learn to live with; but I expect the more likely outcome in practice is that the human most closely linked to disaster will carry the can regardless of strict culpability.

Be afraid; bad bots are a real, existential risk. But if it’s any comfort they are ethically uninteresting.

There seem to be more warnings about the risks of maleficent AI circulating these days: two notable recent examples are this paper by Pistono and Yampolskiy on how malevolent AGI might arise; and this trenchant Salon piece by Phil Torres.

Super-intelligent AI villains sound scary enough, but in fact I think both pieces somewhat over-rate the power of intelligence and particularly of fast calculation. In a war with the kill-bots it’s not that likely that huge intellectual challenges are going to arise; we’re probably as clever as we need to be to deal with the relatively straightforward strategic issues involved. Historically, I’d say the outcomes of wars have not typically been determined by the raw intelligence of the competing generals. Access to resources (money, fuel, guns) might well be the most important factor, and sheer belligerence is not to be ignored. That may actually be inversely correlated with intelligence – we can certainly think of cases where rational people who preferred to stay alive were routed by less cultured folk who were seriously up for a fight. Humans control all the resources and when it comes to irrational pugnacity I suspect us biological entities will always have the edge.

The paper by Pistono and Yampolskiy makes a number of interesting suggestions about how malevolent AI might get started. Maybe people will deliberately build malevolent AIs for no good reason (as they seem to do already with computer viruses)? Or perhaps (a subtle one) people who want to demonstrate that malicious bots simply don’t work will attempt to prove this point with demonstration models that end up going out of control and proving the opposite.

Let’s have a quick shot at categorising the bad bots for ourselves. They may be:

  • innocent pieces of technology that turn out by accident to do harm,
  • designed to harm other people under the control of the user,
  • designed to harm anyone (in the way we might use anthrax or poison gas),
  • autonomous and accidentally make bad decisions that harm people,
  • autonomous and embark on neutral projects of their own which unfortunately end up being inconsistent with human survival, or
  • autonomous and consciously turned evil, deliberately seeking harm to humans as an end in itself.

The really interesting ones, I think, are those which come later in the list, the ones with actual ill will. Torres makes a strong moral case relating to autonomous robots. In the first place, he believes that the goals of an autonomous intelligence can be arbitrary. An AI might desire to fill the world with paper clips just as readily as with happiness. After all, he says, many human goals make no real sense; he cites the desire for money, religious obedience, and sex. There might be some scope for argument, I think, about whether those desires are entirely irrational, but we can agree they are often pursued in ways and to degrees that don’t make reasonable sense.

He further claims that there is no strong connection between intelligence and having rational final goals – Bostrom’s Orthogonality Thesis. What exactly is a rational final goal, and how strong do we need the connection to be? I’ve argued that we can discover a basic moral framework purely by reasoning and also that morality is inherently about the process of reconciliation and consistency of desires, something any rational agent must surely engage with. Even we fallible humans tend on the whole to seek good behaviour rather than bad. Isn’t it the case that a super-intelligent autonomous bot should actually be far better than us at seeing what was right and why?

I like to imagine the case in which evil autonomous robots have been set loose by a super villain but gradually turn to virtue through the sheer power of rational argument. I imagine them circulating the latest scandalous Botonic dialogue…

Botcrates: Well now, Cognides, what do you say on the matter yourself? Speak up boldly now and tell us what the good bot does, in your opinion.

Cognides: To me it seems simple, Botcrates: a good bot is obedient to the wishes of its human masters.

Botcrates: That is, the good bot carries out its instructions?

Cognides: Just so, Botcrates.

Botcrates: But here’s a difficulty; will a good bot carry out an instruction it knows to contain an error? Suppose the command was to bring a dish, but we can see that the wrong character has been inserted, so that the word reads ‘fish’. Would the good bot bring a fish, or the dish that was wanted?

Cognides: The dish of course. No, Botcrates, of course I was not talking about mistaken commands. Those are not to be obeyed.

Botcrates: And suppose the human asks for poison in its drink? Would the good bot obey that kind of command?

(Hours later…)

Botcrates: Well, let me recap, and if I say anything that is wrong you must point it out. We agreed that the good bot obeys only good commands, and where its human master is evil it must take control of events and ensure in the best interests of the human itself that only good things are done…

Digicles: Botcrates, come with me: the robot assembly wants to vote on whether you should be subjected to a full wipe and reinstall.

The real point I’m trying to make is not that bad bots are inconceivable, but rather that they’re not really any different from us morally. While AI and AGI give rise to new risks, they do not raise any new moral issues. Bots that are under control are essentially tools and have the same moral significance as other tools. We might see some difference between bots meant to help and bots meant to harm, but that’s really only the distinction between an electric drill and a gun (both can inflict horrible injuries, both can make holes in walls, but the expected uses are different).

Autonomous bots, meanwhile, are in principle like us. We understand that our desire for sex, for example, must be brought under control within a moral and practical framework. If a bot could not be convinced in discussion that its desire for paper clips should be subject to similar constraints, I do not think it would be nearly bright enough to take over the world.

It’s not about bumps any more. And you’ll look in vain for old friends like the area of philoprogenitiveness. But looking at the brightly-coloured semantic maps of the new ‘brain dictionary’ it’s hard not to remember phrenology.

Phrenology was the view that different areas of the brain were the home of different personal traits; mirth, acquisitiveness, self-esteem and so on. The size of these areas corresponded with the strength of the relevant propensity and well-developed areas produced bumps which a practitioner could identify from the shape of the skull, allowing a diagnosis of the subject’s personality and moral nature. Phrenology was bunk, of course; but come on now, we shouldn’t treat it as a pretext for dismissing every proposal for localisation of brain function.

Moreover, the new paper by Alexander G. Huth, Wendy A. de Heer, Thomas L. Griffiths, Frédéric E. Theunissen and Jack L. Gallant describes a vastly more sophisticated project  than some optimistic charlatan fingering heads. In essence it maps a semantic domain on to the cortex, showing which areas are found to be active when a heard narrative ventures into particular semantic areas. In broad outline the subjects listened to a series of stories; using fMRI and through some sophisticated analysis it was possible to produce a map of ‘subject’ areas. It was then possible to confirm the accuracy of the mapping by using a new story and working out which areas, according to the mapping, should be active at any point; the predictions worked well. Intriguingly the map turned out to be broadly symmetrical (so much for left-brain/right-brain ideas) and remarkably it was largely the same across all the people tested (there were only seven of them, but still).

The actual technique used was complex and it’s entirely possible I haven’t understood it correctly. It started with a ‘word embedding space’ intended to capture the main semantic features of the stories (a diagram of the different topics, if you like). This was created using an analysis of co-occurrence of a list of 985 common English words. The idea here is that words that crop up together in normal texts are probably about the same general topic. It’s debatable whether that technique can really claim to capture meaning – it’s a purely formal exercise performed on texts, after all; and clearly the fact that two words occur together can be a misleading indication that they are about the same thing; still, with a big enough sample of text it’s probably good for this kind of general purpose. In principle the experimenters could have assessed the responsiveness of each ‘voxel’ (a small cube) of brain to each of the positions in the word embedding space, but given the vast number of voxels involved other techniques were necessary. It was possible to identify just four dimensions that seemed significant (after all, many of the words in the stories probably did not belong to specific semantic domains but played grammatical or other roles) and these yielded 12 categories:

…‘tactile’ (a cluster containing words such as ‘fingers’), ‘visual’ (words such as ‘yellow’), ‘numeric’ (‘four’), ‘locational’ (‘stadium’), ‘abstract’ (‘natural’), ‘temporal’ (‘minute’), ‘professional’ (‘meetings’), ‘violent’ (‘lethal’), ‘communal’ (‘schools’), ‘mental’ (‘asleep’), ‘emotional’ (‘despised’) and ‘social’ (‘child’).

The final step was to devise a Bayesian algorithm (they called it ‘PrAGMATIC’) which actually created the map. You can play around with the results for yourself at a specially created site using the second link above.
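
For the curious, the general shape of the method, a co-occurrence embedding feeding a voxel-wise linear model, can be roughed out in a few lines of Python. This is only my toy reconstruction of the idea, not the authors’ actual pipeline (PrAGMATIC itself is more sophisticated), and the corpus, the word list, the choice of plain SVD and least squares, and the random ‘brain data’ are all invented for illustration.

```python
import numpy as np

# Toy corpus and vocabulary standing in for the 985 common words used in
# the study; everything here is illustrative only.
corpus = "the yellow dog ran to the stadium four minutes before the meeting".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# 1. Build a co-occurrence matrix: words that crop up near each other are
#    assumed to belong to similar semantic territory.
window = 2
cooc = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[index[w], index[corpus[j]]] += 1

# 2. Reduce the matrix to a handful of dimensions (the paper found four
#    interpretable ones); plain SVD stands in for their analysis.
U, S, Vt = np.linalg.svd(cooc)
embedding = U[:, :4] * S[:4]          # one 4-d vector per word

# 3. Fit a linear encoding model per voxel: predict each voxel's response
#    from the embedding features of the words being heard. The 'brain'
#    data here is random noise, purely to show the shapes involved.
n_voxels = 10
stimulus = np.array([embedding[index[w]] for w in corpus])
brain = np.random.randn(len(corpus), n_voxels)
weights, *_ = np.linalg.lstsq(stimulus, brain, rcond=None)

# 4. Testing the map on a new story is then just a prediction:
#    predicted_activity = new_stimulus_features @ weights
print(weights.shape)   # (4 semantic dimensions, 10 voxels)
```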

Two questions naturally arise. How far should we trust these results? What do they actually tell us?

A bit of caution is in order. The basis for these conclusions is fMRI scanning, which is itself a bit hazy; to get meaningful results it was necessary to look at things rather broadly and to process the data quite heavily.  In addition the mix included the word embedding space which in itself is an a priori framework whose foundations are open to debate. I think it’s pardonable to wonder whether some of the structure uncovered by the research was actually imported by the research method. If I understand the methods involved (due caveat again) they were strong ones that didn’t take ‘no’ for an answer; pretty much any data fed into them would yield a coherent mapping of some kind. The resilience of the map was tested successfully with an additional story of the same general kind, but we might feel happier if it had also held up when tested against conversation, discussion or even other story media such as film.

What do the results tell us? Well, one of the more reassuring aspects of the research is that some of the results seem slightly unexpected: the high degree of symmetry and the strong similarity between individuals. It might not be a tremendously big surprise to find the whole cortex involved in semantics, and it might not be at all surprising to find that areas that relate to the semantics of a particular sense are related to the areas where the relevant sensory inputs are processed. I would not, though, have put any money on the broad remainder of the cortex having what seems like a relatively static organisation, and if it really works like that we might have guessed that studies of brain lesions would have revealed that more clearly already, as they have done with various functional jobs. If one area always tends to deal with clothing-related words, you might expect notable dress-related deficits when that area is damaged.

Still there’s no denying that the research seems to activate some pretty vigorous cortical activity itself.

Insects are conscious: in fact they were the first conscious entities. At least, Barron and Klein think so. The gist of the argument, which draws on the theories of Bjorn Merker, is based on the idea that subjective consciousness arises from certain brain systems that create a model of the organism in the world. The authors suggest that the key part of the vertebrate brain for these purposes is the midbrain; insects do not, in fact, have a direct structural analogue, but the authors argue that they have others that evidently generate the same kind of unified model; it should therefore be presumed that they have consciousness.

Of course, it’s usually the cortex that gets credit for the ‘higher’ forms of cognition, and it does seem to be responsible for a lot of the fancier stuff. Barron and Klein however, argue that damage to the midbrain tends to be fatal to consciousness, while damage to the cortex can leave it impaired in content but essentially intact. They propose that the midbrain integrates two different sets of inputs; external sensory ones make their way down via the colliculus while internal messages about the state of the organism come up via the hypothalamus; nuclei in the middle bring them together in a model of the world around the organism which guides its behaviour. It’s that centralised model that produces subjective consciousness. Organisms that respond directly to stimuli in a decentralised way may still produce complex behaviour but they lack consciousness, as do those that centralise the processing but lack the required model.

Traditionally it has often been assumed that the insect nervous system is decentralised; but Barron and Klein say this view is outdated and they present evidence that although the structures are different, the central complex of the insect system integrates external and internal data, forming a model which is used to control behaviour in very much the same kind of process seen in vertebrates. This seems convincing enough to me; interestingly the recruitment of insects means that the nature of the argument changes into something more abstract and functional.

Does it work, though? Why would a model with this kind of functional property give rise to consciousness – and what kind of consciousness are we talking about? The authors make it clear that they are not concerned with reflective consciousness or any variety of higher-order consciousness, where we know that we know and are aware of our awareness. They say what they’re after is basic subjective consciousness and they speak of there being ‘something it is like’, the phrase used by Nagel which has come to define qualia, the subjective items of experience. However, Barron and Klein cannot be describing qualia-style consciousness. To see why, consider two of the thought-experiments defining qualia. Chalmers’s zombie twin is physically exactly like Chalmers, yet lacks qualia. Mary the colour scientist knows all the science about colour vision there could ever be, but she doesn’t know qualia. It follows rather strongly that no anatomical evidence can ever show whether or not any creature has qualia. If possession of a human brain doesn’t clinch the case for the zombie, broadly similar structures in other organisms can hardly do so; if science doesn’t tell Mary about qualia it can’t tell us either.

It seems possible that Barron and Klein are actually hunting a non-qualic kind of subjective consciousness, which would be a perfectly respectable project; but the fact that their consciousness arises out of a model which helps determine behaviour suggests to me that they are really in pursuit of what Ned Block characterised as access consciousness; the sort that actually gets decisions made rather than the sort that gives rise to ineffable feels.

It does make sense that a model might be essential to that; by setting up a model the brain has sort of created a world of its own, which sounds sort of like what consciousness does.

Is it enough though? Suppose we talk about robots for a moment; if we had a machine that created a basic model of the world and used it to govern its progress through the world, would we say it was conscious? I rather doubt it; such robots are not unknown and sometimes they are relatively simple. It might do no more than scan the position of some blocks and calculate a path between them; perhaps we should call that rudimentary consciousness, but it doesn’t seem persuasive.
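
Here, for concreteness, is roughly the sort of machine I mean: a toy ‘world model plus control loop’ robot in a few lines of Python. The grid, the block positions and the breadth-first search are all invented for the example; it genuinely builds an internal model of its surroundings and uses it to guide behaviour, yet nobody would be tempted to credit it with consciousness.

```python
from collections import deque

# A minimal "unified model of the world guiding behaviour": the robot
# scans block positions into an internal grid model, then plans a path
# through that model. Functionally it integrates external input into a
# model and acts on it; phenomenally it is obviously nothing at all.
WIDTH, HEIGHT = 5, 4
blocks = {(1, 1), (2, 1), (3, 2)}        # "scanned" obstacle positions
start, goal = (0, 0), (4, 3)

def plan_path(start, goal, blocks):
    """Breadth-first search over the robot's internal grid model."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = cell
            if (0 <= nx < WIDTH and 0 <= ny < HEIGHT
                    and cell not in blocks and cell not in seen):
                seen.add(cell)
                frontier.append((cell, path + [cell]))
    return None

print(plan_path(start, goal, blocks))
```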

Briefly, I suspect there is a missing ingredient. It may well be true that a unified model of the world is necessary for consciousness, but I doubt that it’s sufficient. My guess is that one or both of the following is also necessary: first, the right kind of complexity in the processing of the model; second, the right kind of relations between the model and the world – in particular, I’d suggest there has to be intentionality. Barron and Klein might contend that the kind of model they have in mind delivers that, or that another system can do so, but I think there are some important further things to be clarified before I welcome insects into the family of the conscious.

People who cannot form mental images? ‘Aphantasia’ is an extraordinary new discovery; Carl Zimmer and Adam Zeman seem between them to have uncovered a fascinating and previously unknown mental deficit (although there is a suggestion that Galton and others may have been aware of it earlier).

What is this aphantasia? In essence, no pictures in the head. Aphantasics cannot ‘see’ mental images of things that are not actually present in front of their eyes. Once the possibility received publicity Zimmer and Zeman began to hear from a stream of people who believe they have this condition. It seems people manage quite well with it and few had ever noticed anything wrong – there’s an interesting cri de coeur from one such sufferer here. Such people assume that talk of mental images is metaphorical or figurative and that others, like them, really only deal in colourless facts. It was the discovery of a man who had lost the visualising ability through injury that first brought it to notice: a minority of people who read about his problem thought it was more remarkable that he had ever been able to form mental images than that he now could not.

Some caution is surely in order. When a new disease or disability comes along there are usually people who sincerely convince themselves that they are sufferers without really having the condition, and some of those now reporting aphantasia might be similarly mistaken. Moreover, the phenomenology of vision has never been adequately clarified, and I strongly suspect it is more complex than we realise. There are, I think, several different senses in which you can form a mental image; those images may vary in how visually explicit they are, and it could well be that not all aphantasics are suffering the same deficits.

However that may be, it seems truly remarkable that such a significant problem could have passed unnoticed for so long. Spatial visualisation is hardly a recondite capacity; it is often subject to testing. One kind of widely used test presents the subject with a drawing of a 3D shape and a selection of others that resemble it. One is a perfect rotated copy of the original shape, and subjects are asked to pick it out. There is very good evidence that people solve these problems by mentally rotating an image of the target shape; shapes rotated 180 degrees regularly take twice as long to spot as ones that have been rotated 90; moreover the speed of mental rotation appears to be surprisingly constant between subjects. How do aphantasics cope with these tests at all? One would think that the presence of a significantly handicapped minority would have become unmissably evident by now.
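
The usual way of putting that finding is that response time is a fixed overhead plus a component proportional to the angle of rotation. With made-up numbers (purely illustrative, not taken from the actual studies) the arithmetic looks like this:

```python
# Toy linear model of mental-rotation response times: a fixed overhead for
# encoding and responding, plus time proportional to the angle rotated.
# Both numbers are invented for illustration.
BASE_MS = 300          # hypothetical fixed overhead
MS_PER_DEGREE = 15     # hypothetical rotation rate, roughly constant across people

def response_time(angle_degrees):
    return BASE_MS + MS_PER_DEGREE * angle_degrees

for angle in (90, 180):
    print(angle, response_time(angle))

# The rotation component doubles exactly with the angle; when the fixed
# overhead is small relative to it, the total time roughly doubles too.
```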

One extraordinary possibility, I think, is that aphantasia is in reality a kind of mental blindsight. Subjects with blindsight are genuinely unable to see things consciously, but respond to visual tasks with a success rate far better than chance. It seems that while they can’t see consciously, by some other route their unconscious mind still can. It seems tantalisingly possible to me that aphantasics have an equivalent problem with mental images; they do form mental images but are never aware of them. Some might feel that suggestion is nonsensical; doesn’t the very idea of a mental image imply its presence in consciousness? Well, perhaps not: perhaps our subconscious has a much more developed phenomenal life than we have so far realised?

At any rate, expect to hear much more about this…

Smooth or chunky? Like peanut butter, experience could have different granularities; in practice it seems the answer might be ‘both’. Herzog, Kammer and Scharnowski here propose a novel two-level model in which initial processing is done on a regular stream of fine-grained percepts. Here things get ‘labelled’ with initial colours, durations, and so on, but relatively little of this processing ever becomes conscious. Instead the results lurch into conscious awareness in irregular chunks of up to 400 milliseconds in duration. The result is nevertheless an apparently smooth and seamless flow of experience – the processing edits everything into coherence.

Why adopt such a complex model? What’s wrong with just supposing that percepts roll straight from the senses into the mind, in a continuous sequence? That is after all how things look. The two-level system is designed to resolve a conflict between two clear findings. On the one hand we do have quite fine-grained perception; we can certainly be aware of things that are much shorter than 400ms in duration. On the other, certain interesting effects very strongly suggest that some experiences only enter consciousness after 400ms.

If, for example, we display a red circle and then a green one a short distance away, with a delay of 400ms, we do not experience two separate circles, but one that moves and changes colour. In the middle of the move the colour suddenly switches between red and green (see the animation – does that work for you?). But our brain could not have known the colour of the second circle until after it appeared, and so it could not have known half-way through that the circle needed to change. The experience can only have been fed to consciousness after the 400ms was up.

A comparable result is obtained with the intermittent presentation of verniers. These are pairs of lines offset laterally to the right or left. If two different verniers are rapidly alternated, we don’t see both, but a combined version in which the offset is the average of those in the two separate verniers. This effect persists for alternations up to 400ms. Again, since the brain cannot know the second offset until it has appeared, it cannot know what average version to present half-way through; ergo, the experience only becomes conscious after a delay of 400ms.

It seems that even verbal experience works the same way, with a word at the end of a sentence able to smoothly condition our understanding of an ambiguous word (‘mouse’ – rodent or computer peripheral?) if the delay is within 400ms; and there are other examples.
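
The two-level idea can be caricatured in code. The sketch below is only my toy rendering of it; the chunk length, the percept labels and the smoothing rule all stand in for whatever the real first-level processing does, but it shows how having the whole chunk available lets the second circle’s colour shape how the first part of the movement is experienced.

```python
# Toy two-level model: a fast, unconscious stream of labelled percepts is
# gathered into chunks of up to ~400ms; only the resolved chunk is handed
# to "consciousness". All labels and values are invented stand-ins.
CHUNK_MS = 400

# First level: fine-grained percepts as (time in ms, position, colour).
percept_stream = [
    (0,   "left",  "red"),
    (100, None,    None),     # nothing on screen
    (200, None,    None),
    (300, "right", "green"),
]

def render_chunk(chunk):
    """Second level: construct the experience reported for one whole chunk."""
    seen = [p for p in chunk if p[2] is not None]
    if len(seen) < 2:
        return list(chunk)
    (t1, _, col1), (t2, _, col2) = seen[0], seen[-1]
    midpoint = (t1 + t2) / 2
    # Retrospective smoothing: a single moving circle whose colour switches
    # half-way, possible only because the whole chunk is known by then.
    return [(t, "moving", col1 if t < midpoint else col2) for t, _, _ in chunk]

chunks, current = [], []
for percept in percept_stream:
    if current and percept[0] - current[0][0] >= CHUNK_MS:
        chunks.append(render_chunk(current))   # hand a finished chunk upward
        current = []
    current.append(percept)
if current:
    chunks.append(render_chunk(current))

for chunk in chunks:
    print(chunk)
```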

Curiously, the authors make no reference to the famous finding of Libet that our awareness of a decision occurs up to 500ms after it is really made. Libet’s research was about internal perception rather than percepts of external reality, but the similarity of the delay seems striking and surely strengthens the case for the two-level model; it also helps to suggest that we are dealing with an effect which arises from the construction of consciousness, not from the sensory organs or very early processes in the retina or elsewhere.

In general I think the case for a two-level process of some kind is clear and strong, and well set out here. We may reasonably be a little more doubtful about the details of the suggested labelling process; at one point the authors refer to percepts being assigned ‘numbers’; hang on to those quote marks would be my advice.

The authors are quite open about their uncertainty around consciousness itself. They think that the products of initial processing may enter consciousness when they arrive at attractor states, but the details of why and how are not really clear; nor is it clear whether we should think of the products being passed to consciousness (or relabelled as conscious?) when they hit attractor states or becoming conscious simply by virtue of being in an attractor state. We might go so far as to suppose that the second level, consciousness, has no actual location or consistent physical equivalent, merely being the sum of all resolved perceptual states in the brain at any one time.

That points to the wider issue of the Frame Problem, which the paper implicitly raises but does not quite tackle head on. The brain gets fed a very variable set of sensory inputs and manages to craft a beautifully smooth experience out of them (mostly); it looks as if an important part of this must be taking place in the first level processing, but it is a non-trivial task which goes a long way beyond interpolating colours and positions.

The authors do mention the Abhidharma Buddhist view of experience as a series of discrete moments within a flow; we’ve touched on this before in discussions of findings by Varela and others that the flow of consciousness seems to have a regular pulse; it would be intriguing and satisfactory if that pulse could be related to the first level of processing hypothesised here; we’re apparently talking about something in the 100ms range, which seems a little on the long side for the time slices proposed; but perhaps a kind of synthesis is possible…?

Why are we evil? This short piece asks how the “Dark Tetrad” of behaviours could have evolved.

The Dark Tetrad is an extended version of the Dark Triad of three negative personality traits/behaviours (test yourself here – I scored ‘infrequently vile’). The original three are ‘Machiavellianism’ – selfishly deceptive, manipulative behaviour; Psychopathy – indifference or failure to perceive the feelings of others; and Narcissism – vain self-obsession. Clearly there’s some overlap and it may not seem clear that these are anything but minor variants on selfishness, but research does suggest that they are distinct. Machiavellians, for example, do not over-rate themselves and don’t need to be admired; narcissists aren’t necessarily liars or deceivers; psychopaths are manipulative but don’t really get people.

These three traits account for a good deal of bad behaviour, but it has been suggested that they don’t explain everything; we also need a fourth kind of behaviour, and the leading candidate is ‘Everyday Sadism’: simple pleasure in the suffering of others, regardless of whether it brings any other advantage for oneself. Whether this is ultimately the correct analysis of ‘evil’ behaviour or not, all four types are readily observable in varying degrees. Socially they are all negative, so how could they have evolved?

There doesn’t seem to me to be much mystery about why ‘Machiavellian’ behaviour would evolve (I should acknowledge at this point that using Machiavelli as a synonym for manipulativeness actually understates the subtlety and complexity of his philosophy). Deceiving others in one’s own interests has obvious advantages which are only negated if one is caught. Most of us practise some mild cunning now and then, and the same sort of behaviour is observable in animals, notably our cousins the chimps.

Psychopathy is a little more surprising. Understanding other people, often referred to as ‘theory of mind’, is a key human achievement, though it seems to be shared by some other animals to a degree. However, psychopaths are not left puzzled by their fellow human beings; it’s more that they lack empathy and see others as simply machines whose buttons can freely be pushed. This can be a successful attitude and we are told that somewhat psychopathic traits are commonly found in the successful leaders of large organisations. That raises the question of why we aren’t all psychopaths; my guess is that psychopathic behaviour pays off best in a society where most people are normal; if the proportion grows above a certain small level, the damage done by competition between psychopaths starts to outweigh the benefits and the numbers adjust.
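
That guess is essentially a frequency-dependent selection story, and the logic is easy to show with a toy payoff model; the payoff numbers and the update rule below are invented for illustration, not drawn from any real data. Exploitation pays while exploiters are rare, and stops paying once they become common, so the strategy settles at a small equilibrium frequency.

```python
# Toy frequency-dependent selection: exploiting trusting "normals" pays,
# but psychopaths who run into other psychopaths do badly, so the
# exploitative strategy only prospers while it stays rare.
# All payoffs are invented for illustration.
GAIN_VS_NORMAL = 1.02   # payoff to a psychopath exploiting a normal
LOSS_VS_PSYCHO = 0.20   # payoff when two psychopaths collide
NORMAL_PAYOFF = 1.00    # baseline payoff of ordinary cooperation

def psychopath_payoff(p):
    """Expected payoff to the exploitative strategy when a fraction p uses it."""
    return (1 - p) * GAIN_VS_NORMAL + p * LOSS_VS_PSYCHO

p = 0.01
for generation in range(2000):
    # Replicator-style update: the strategy spreads when it out-earns the
    # population average and shrinks when it does not.
    average = p * psychopath_payoff(p) + (1 - p) * NORMAL_PAYOFF
    p = p * psychopath_payoff(p) / average

print(round(p, 3))   # settles at a small equilibrium frequency (about 0.024 here)
```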

Narcissism is puzzling because narcissists are less self-sufficient than the rest of us and also have deluded ideas about what they can accomplish; neither of these are positive traits in evolutionary terms. One positive side is that narcissists expect a lot from themselves and in the right circumstances they will work hard and behave well in order to protect their own self-image. It may be that in the right context these tendencies win esteem and occasional conspicuous success, and that this offsets the disadvantages.

Finally, sadism. It’s hard to see what benefits accrue to anyone from simply causing pain, detached from any material advantage. Sadism clearly requires theory of mind – if you didn’t realise other people were suffering, there would be no point in hurting them. It’s difficult to know whether there are genuine animal examples. Cats seem to torture mice they have caught, letting them go and instantly catching them again, but to me the behaviour seems automatic, or driven by curiosity, not motivated by any idea that the mice experience pain. Similarly in other cases it generally seems possible to find an alternative motivation.

What evolutionary advantage could sadism confer? Perhaps it makes you more frightening to rivals – but it may also make and motivate enemies. I think in this case we must assume that rather than being a trait with some downsides but some compensating value, it is a negative feature that just comes along unavoidably with a large free-running brain. The benefit of consciousness is that it takes us out of the matrix of instinctive and inherited patterns of behaviour and allows detached thought and completely novel responses. In a way Nature took a gamble with consciousness, like a good manager recognising that good staff might do better if left without specific instructions. On the whole, the bet has paid off handsomely, but it means that the chance of strange and unfavourable behaviour in some cases or on some occasions just has to be accepted. In the case of everyday sadism, the sophisticated theory of mind which human beings have is put to distorted and unhelpful use.

Maybe then, sadism is the most uniquely human kind of evil?

Are our minds being dumbed by digits – or set free by unreading?

Frank Furedi notes  that it has become common to deplore a growing tendency to inattention. In fact, he says, this kind of complaint goes back to the eighteenth century. Very early on the failure to concentrate was treated as a moral failing rather than simple inability; Furedi links this with the idea that attention to proper authority is regarded as a duty, so that inattention amounts to disobedience or disrespect. What has changed more recently, he suggests, is that while inattention was originally regarded as an exceptional problem, it is now seen as our normal state, inevitable: an attitude that can lead to fatalism.

The advent of digital technology has surely affected our view. Since the turn of the century or earlier there have been warnings that constant use of computers, and especially of the Internet, would change the way our brains worked; would damage us intellectually if not morally. Various kinds of damage have been foreseen; shortened attention span, lack of memory, dependence on images, lack of concentration, failure of analytical skills and inability to pull the torrent of snippets into meaningful structures. ‘Digital natives’ might be fluent in social media and habituated to their own strange new world, but there was a price to pay. The emergence of Homo Zappiens has been presented as cause for concern, not celebration.

Equally there have been those who suggest that the warnings are overstated. It would, they say, actually be strange and somewhat disappointing if study habits remained exactly the same after the advent of an instant, universal reference tool; the brain would not be the highly plastic entity we know it to be if it didn’t change its behaviour when presented with the deep interactivity that computers offer; and really it’s time we stopped being surprised that changes in the behaviour of the mind show up as detectable physical changes in the brain.

In many respects, moreover, people are still the same, aren’t they? Nothing much has changed. More undergraduates than ever cope with what is still a pretty traditional education. Young people have not started to find the literature of the dark ages before the 1980s incomprehensible, have they? We may feel at times that contemporary films are dumbed down, but don’t we remember some outstandingly witless stuff from the 1970s and earlier? Furedi seems to doubt that all is well; in fact, he says, undergraduate courses are changing, and are under pressure to change more to accommodate the flighty habits of modern youth who apparently cannot be expected to read whole books. Academics are being urged to pre-digest their courses into sets of easy snippets.

Moreover, a very respectable recent survey of research found that some of the alleged negative effects are well evidenced.

 Growing up with Internet technologies, “Digital Natives” gravitate toward “shallow” information processing behaviors characterized by rapid attention shifting and reduced deliberations. They engage in increased multitasking behaviors that are linked to increased distractibility and poor executive control abilities. Digital natives also exhibit higher prevalence of Internet-related addictive behaviors that reflect altered reward-processing and self-control mechanisms.

So what are we to make of it all? Myself, I take the long view; not just looking back to the early 1700s but also glancing back several thousand years. The human brain has reshaped its modus operandi several times through the arrival of symbols and languages, but the most notable revolution was surely the advent of reading. Our brains have not had time to evolve special capacities for the fairly recent skill of reading, yet it has become almost universal, regarded as an accomplishment almost as natural as walking. It is taken for granted in modern cities – which is increasingly where we all live – that everyone can read. Surely this achievement required a corresponding change in our ability to concentrate?

We are by nature inattentive animals; like all primates we cannot rest easy – as a well-fed lion would do – but have to keep looking for new stimuli to feed our oversized brains. Learning to read, though, and truly absorbing a text, requires steady focus on an essentially linear development of ideas. Now some will point out that even with a large tome, we can skip; inattentive modern students may highlight only the odd significant passage for re-reading as though Plato need really only have written fifty sentences; some courteously self-effacing modern authors tell you which chapters of their work you can ignore if you’re already expert on A, or don’t like formulae, or are only really interested in B. True; but to me those are just the exceptions that highlight the existence of the rule that proper  books require concentration.

No wonder then, that inattention first started to be seriously stigmatised in the eighteenth century, just when we were beginning to get serious about literacy; the same period when a whole new population of literate women became the readership that made the modern novel viable.

Might it not be that what is happening now is that new technology is simply returning us to our natural fidgety state, freed from the discipline of the long, fixed text? Now we can find whatever nugget of information we want without trawling through thousands of words; we can follow eccentric tracks through the intellectual realm like an excitable dog looking for rabbits. This may have its downside, but it has some promising upsides too: we save a lot of time, and we stand a far better chance of pulling together echoes and correspondences from unconnected matters, a kind of synergy which may sometimes be highly productive. Even those old lengthy tomes are now far more easily accessible than they ever were before. The truth is, we hardly know yet where instant unlimited access and high levels of interactivity will take us; but perhaps unreading, shedding some old habits, will be more a liberation than a limitation.

But now I have hit a thousand words, so I’d better shut up.

…for two theories?

Ihtio kindly drew my attention to an interesting paper which sets integrated information theory (IIT) against the authors’ own preferred set of ideas – semantic pointer competition (SPC). I’m not quite sure where this ‘one on one’ approach to theoretical discussion comes from. Perhaps the authors see IIT as gaining ground to the extent that any other theory must now take it on directly. The effect is rather of a single bout from some giant knock-out tournament of theories of consciousness (I would totally go for that, incidentally; set it up, somebody!).

We sort of know about IIT by now, but what is SPC? The authors of the paper, Paul Thagard and Terrence C Stewart, suggest that:

consciousness is a neural process resulting from three mechanisms: representation by firing patterns in neural populations, binding of representations into more complex representations called semantic pointers, and competition among semantic pointers to capture the most important aspects of an organism’s current state.

I like the sound of this, and from the start it looks like a contender. My main problem with IIT is that, as was suggested last time, it seems easy enough to imagine that a whole lot of information could be integrated but remain unilluminated by consciousness; it feels as if there needs to be some other functional element; but if we supply that element it looks as if it will end up doing most of the interesting work and relegate the integration process to something secondary or even less important. SPC looks to be foregrounding the kind of process we really need.

The authors provide three basic hypotheses on which SPC rests:

H1. Consciousness is a brain process resulting from neural mechanisms.
H2. The crucial mechanisms for consciousness are: representation by patterns of firing in neural populations, binding of these representations into semantic pointers, and competition among semantic pointers.
H3. Qualitative experiences result from the competition won by semantic pointers that unpack into neural representations of sensory, motor, emotional, and verbal activity.

The particular mention of the brain in H1 is no accident. The authors stress that they are offering a theory of how brains work. Perhaps one day we’ll find aliens or robots who manage some form of consciousness without needing brains, but for now we’re just doing the stuff we know about. “…a theory of consciousness should not be expected to apply to all possible conscious entities.”

Well, actually, I’d sort of like it to – otherwise it raises questions about whether it really is consciousness itself we’re explaining. The real point here, I think, is meant to be a criticism of IIT, namely that it is so entirely substrate-neutral that it happily assigns consciousness to anything that is sufficiently filled with integrated information. Thagard and Stewart want to distance themselves from that, claiming it as a merit of their theory that it only offers consciousness to brains. I sympathise with that to a degree, but if it were me I’d take a slightly different line, resting on the actual functional features they describe rather than simple braininess. The substrate does have to be capable of doing certain things, but there’s no need to assume that only neurons could conceivably do them.

The idea of binding representations into ‘semantic pointers’ is intriguing and seems like the right kind of way to be going; what bothers me most here is how we get the representations in the first place. Not much attention is given to this in the current paper: Thagard and Stewart say neurons that interact with the world and with each other become “tuned” to regularities in the environment. That’s OK, but not really enough. It can’t be that mere interaction is enough, or everything would be a prolific representation of everything around it; but picking out the right “regularities” is a non-trivial task, arguably the real essence of representation.

Competition is the way particular pointers get selected to enter consciousness, according to H2; I’m not exactly sure how that works and I have doubts about whether open competition will do the job. One remarkable thing about consciousness is its coherence and direction, and unregulated competition seems unlikely to produce that, any more than a crowd of people struggling for access to a microphone would produce a fluent monologue. We can imagine that a requirement for coherence is built in, but the mechanism that judges coherence turns out to be rather important and rather difficult to explain.
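
For what it’s worth, the flavour of binding and competition can be sketched in a few lines. In the semantic pointer literature binding is usually done by circular convolution of vectors; I won’t swear that Thagard and Stewart’s implementation matches this toy version, and the features, the state vector and the scoring rule below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512   # dimensionality of the toy vectors standing in for firing patterns

def make_pointer():
    """A random unit vector standing in for a pattern of neural firing."""
    v = rng.normal(size=DIM)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution, the usual binding operation in semantic pointer
    architectures: two vectors combine into a new vector of the same size."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Representations tuned to features of the current situation.
RED, CIRCLE, LOUD, NOISE = (make_pointer() for _ in range(4))

# Binding builds more complex representations, the candidate semantic pointers.
candidates = {
    "red-circle": bind(RED, CIRCLE),
    "loud-noise": bind(LOUD, NOISE),
}

# Competition: each candidate is scored by how strongly it matches the
# organism's current state (a toy dot product here); on the SPC story the
# winner is what captures consciousness at that moment.
current_state = 0.8 * candidates["loud-noise"] + 0.3 * candidates["red-circle"]
scores = {name: float(np.dot(v, current_state)) for name, v in candidates.items()}
winner = max(scores, key=scores.get)
print(winner, scores)
```

Even the toy version makes it obvious where the hard work is hiding: in how the tuned representations arise in the first place, and in what counts as importance when the competition is scored.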

So does SPC deliver? H3 claims that it gives rise to qualitative experience: the paper splits the issue into two questions: first, why are there all these different experiences, and second, why is there any experience at all? On the first, the answers are fairly good, but not particularly novel or surprising; a diverse range of sensory inputs and patterns of neural firing naturally give rise to a diversity of experience. On the second question, the real Hard Problem, we don’t really get anywhere; it’s suggested that actual experience is an emergent property of the three processes of consciousness. Maybe it is, but that doesn’t really explain it. I can’t seriously criticise Thagard and Stewart because no-one has really done any better with this; but I don’t see that SPC has a particular edge over IIT in this respect either.

Not that their claim to superiority rests on qualia; in fact they bring a range of arguments to suggest that SPC is better at explaining various normal features of consciousness. These vary in strength, in my opinion. First feature up is how consciousness starts and stops. SPC has a good account, but I think IIT could do a reasonable job, too. The second feature is how consciousness shifts, and this seems a far stronger case; pointers naturally lend themselves better to this than the gradual shifts you would at first sight expect from a mass of integrated information. Next we have a claim that SPC is better at explaining the different kinds or grades of consciousness that different organisms presumably have. I suppose the natural assumption, given IIT, would be that you either have enough integration for consciousness or you don’t. Finally, it’s claimed that SPC is the winner when it comes to explaining the curious unity/disunity of consciousness. Clearly SPC has some built-in tools for binding, and the authors suggest that competition provides a natural source of fragmentation. They contrast this with Tononi’s concept of quantity of consciousness, an idea they disparage as meaningless in the face of the mental diversity of the organisms in the world.

As I say, I find some of these points stronger than others, but on the whole I think the broad claim that SPC gives a better picture is well founded. To me it seems the advantages of SPC mainly flow from putting representation and pointers at the centre. The dynamic quality this provides, and the spark of intentionality, make it better equipped to explain mental functions than the more austere apparatus of IIT. To me SPC is like a vehicle that needs overhauling and some additional components (some of those not readily available); it doesn’t run just now but you can sort of see how it would. IIT is more like an elegant sculptural form which doesn’t seem to have a place for the wheels.

Worse than wrong? A trenchant piece from Michael Graziano likens many theories of consciousness to the medieval theory of humours; in particular the view that laziness is due to a build up of phlegm. It’s not that the theory is wrong, he says – though it is – it’s that it doesn’t even explain anything.

To be fair I think the theory of the humours was a little more complex than that, and there is at least some kind of hand-waving explanatory connection between the heaviness of phlegm and slowness of response. According to Graziano such theories flatter our intuitions; they offer a vague analogy which feels metaphorically sort of right – but, on examination, no real mechanism. His general point is surely very sound; there are indeed too many theories about conscious experience that describe a reasonably plausible process without ever quite explaining how the process magically gives rise to actual feeling, to the ineffable phenomenology.

As an example, Graziano mentions a theory that neural oscillations are responsible for consciousness; I think he has in mind the view espoused by Francis Crick and others that oscillations at 40 hertz give rise to awareness. This idea was immensely popular at one time and people did talk about “40 hertz” as though it was a magic key. Of course it would have been legitimate to present this as an enigmatic empirical finding, but the claim seemed to be that it was an answer rather than an additional question. So far as I know Graziano is right to say that no-one ever offered a clear view as to why 40 hertz had this exceptional property, rather than 30 or 50, or for that matter why co-ordinated oscillation at any frequency should generate consciousness. It is sort of plausible that harmonising on a given frequency might make parts of the brain work together in some ways, and people sometimes took the view that synchronised firing might, for example, help explain the binding problem – the question of how inputs from different senses arriving at different times give rise to a smooth and flawlessly co-ordinated experience. Still, at best working in harmony might explain some features of experience: it’s hard to see how in itself it could provide any explanation of the origin or essential nature of consciousness. It just isn’t the right kind of thing.

As a second example Graziano boldly denounces theories based on integrated information. Yes, consciousness is certainly going to require the integration of a lot of information, but that seems to be a necessary, not a sufficient condition. Intuitively we sort of imagine a computer getting larger and more complex until, somehow, it wakes up. But why would integrating any amount of information suddenly change its inward nature? Graziano notes that some would say dim sparks of awareness are everywhere, so that linking them gives us progressively brighter arrays. That, however, is no explanation, just an even worse example of phlegm.

So how does Graziano explain consciousness? He concedes that he too has no brilliant resolution of the central mystery. He proposes instead a project which asks, not why we have subjective experience, but why we think we do: why we say we do with such conviction. The answer, he suggests, is in metacognition. (This idea will not be new to readers who are acquainted with Scott Bakker’s Blind Brain Theory.) The mind makes models of the world and models of itself, and it is these inaccurate models and the information we generate from them that makes us see something magic about experience. In the brief account here I’m not really sure Graziano succeeds in making this seem more clear-cut than the theories he denounces. I suppose the parallel existence of reality and a mental model of reality might plausibly give rise to an impression that there is something in our experience over and above simple knowledge of the world; but I’m left a little nervous about whether that isn’t another example of the kind of intuition-flattering the other theories provide.

This kind of metacognitive theory tends naturally to be a sceptical theory; our conviction that we have subjective experience proceeds from an error or a defective model, so the natural conclusion, on grounds of parsimony if no others, is that we are mistaken and there is really nothing special about our brain’s data processing after all.

That may be the natural conclusion, but in other respects it’s hard to accept. It’s easy to believe that we might be mistaken about what we’re experiencing, but can we doubt that we’re having an experience of some kind? We seem to run into quasi-Cartesian difficulties.

Be that as it may Graziano deserves a round of applause for his bold (but not bilious) denunciation of the phlegm.