Insects are conscious: in fact they were the first conscious entities. At least, Barron and Klein think so. The gist of the argument, which draws on the theories of Bjorn Merker, is that subjective consciousness arises from certain brain systems that create a model of the organism in the world. The authors suggest that the key part of the vertebrate brain for these purposes is the midbrain; insects do not, in fact, have a direct structural analogue, but the authors argue that they have other structures that evidently generate the same kind of unified model; it should therefore be presumed that they have consciousness.

Of course, it’s usually the cortex that gets credit for the ‘higher’ forms of cognition, and it does seem to be responsible for a lot of the fancier stuff. Barron and Klein, however, argue that damage to the midbrain tends to be fatal to consciousness, while damage to the cortex can leave it impaired in content but essentially intact. They propose that the midbrain integrates two different sets of inputs; external sensory ones make their way down via the colliculus while internal messages about the state of the organism come up via the hypothalamus; nuclei in the middle bring them together in a model of the world around the organism which guides its behaviour. It’s that centralised model that produces subjective consciousness. Organisms that respond directly to stimuli in a decentralised way may still produce complex behaviour but they lack consciousness, as do those that centralise the processing but lack the required model.

Traditionally it has often been assumed that the insect nervous system is decentralised; but Barron and Klein say this view is outdated and they present evidence that although the structures are different, the central complex of the insect system integrates external and internal data, forming a model which is used to control behaviour in very much the same kind of process seen in vertebrates. This seems convincing enough to me; interestingly the recruitment of insects means that the nature of the argument changes into something more abstract and functional.

Does it work, though? Why would a model with this kind of functional property give rise to consciousness – and what kind of consciousness are we talking about? The authors make it clear that they are not concerned with reflective consciousness or any variety of higher-order consciousness, where we know that we know and are aware of our awareness. They say what they’re after is basic subjective consciousness and they speak of there being ‘something it is like’, the phrase used by Nagel which has come to define qualia, the subjective items of experience. However, Barron and Klein cannot be describing qualia-style consciousness. To see why, consider two of the thought-experiments defining qualia. Chalmers’s zombie twin is physically exactly like Chalmers, yet lacks qualia. Mary the colour scientist knows all the science about colour vision there could ever be, but she doesn’t know qualia. It follows rather strongly that no anatomical evidence can ever show whether or not any creature has qualia. If possession of a human brain doesn’t clinch the case for the zombie, broadly similar structures in other organisms can hardly do so; if science doesn’t tell Mary about qualia it can’t tell us either.

It seems possible that Barron and Klein are actually hunting a non-qualic kind of subjective consciousness, which would be a perfectly respectable project; but the fact that their consciousness arises out of a model which helps determine behaviour suggests to me that they are really in pursuit of what Ned Block characterised as access consciousness; the sort that actually gets decisions made rather than the sort that gives rise to ineffable feels.

It does make sense that a model might be essential to that; by setting up a model the brain has sort of created a world of its own, which sounds sort of like what consciousness does.
Is it enough though? Suppose we talk about robots for a moment; if we had a machine that created a basic model of the world and used it to govern its progress through the world, would we say it was conscious? I rather doubt it; such robots are not unknown and some of them are relatively simple. Such a robot might do no more than scan the position of some blocks and calculate a path between them; perhaps we should call that rudimentary consciousness, but it doesn’t seem persuasive.

Briefly, I suspect there is a missing ingredient. It may well be true that a unified model of the world is necessary for consciousness, but I doubt that it’s sufficient. My guess is that one or both of the following is also necessary: first, the right kind of complexity in the processing of the model; second, the right kind of relations between the model and the world – in particular, I’d suggest there has to be intentionality. Barron and Klein might contend that the kind of model they have in mind delivers that, or that another system can do so, but I think there are some important further things to be clarified before I welcome insects into the family of the conscious.

People who cannot form mental images? ‘Aphantasia’ is an extraordinary new discovery; Carl Zimmer and Adam Zeman seem between them to have uncovered a fascinating and previously unknown mental deficit (although there is a suggestion that Galton and others may have been aware of it earlier).

What is this aphantasia? In essence, no pictures in the head. Aphantasics cannot ‘see’ mental images of things that are not actually present in front of their eyes. Once the possibility received publicity Zimmer and Zeman began to hear from a stream of people who believe they have this condition. It seems people manage quite well with it and few had ever noticed anything wrong – there’s an interesting cri de coeur from one such sufferer here. Such people assume that talk of mental images is metaphorical or figurative and that others, like them, really only deal in colourless facts. It was the discovery of a man who had lost the visualising ability through injury that first brought it to notice: a minority of people who read about his problem thought it was more remarkable that he had ever been able to form mental images than that he now could not.

Some caution is surely in order. When a new disease or disability comes along there are usually people who sincerely convince themselves that they are sufferers without really having the condition. Some might be mistaken. Moreover, the phenomenology of vision has never been adequately clarified, and I strongly suspect it is more complex than we realise. There are, I think, several different senses in which you can form a mental image; those images may vary in how visually explicit they are, and it could well be that not all aphantasics are suffering the same deficits.

However that may be, it seems truly remarkable that such a significant problem could have passed unnoticed for so long. Spatial visualisation is hardly a recondite capacity; it is often subject to testing. One kind of widely used test presents the subject with a drawing of a 3D shape and a selection of others that resemble it. One is a perfect rotated copy of the original shape, and subjects are asked to pick it out. There is very good evidence that people solve these problems by mentally rotating an image of the target shape; shapes rotated 180 degrees regularly take twice as long to spot as ones that have been rotated 90; moreover the speed of mental rotation appears to be surprisingly constant between subjects. How do aphantasics cope with these tests at all? One would think that the presence of a significantly handicapped minority would have become unmissably evident by now.
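The chronometric claim above can be put in a toy numerical form: if mental rotation proceeds at a constant rate, response time is simply proportional to the angle, so a shape rotated 180 degrees takes twice as long as one rotated 90. A minimal sketch, with an invented rate constant (real data also include a fixed intercept for encoding and responding, omitted here for clarity):

```python
# Illustrative linear model of mental-rotation response times.
# The rate constant is an assumption for demonstration, not a
# measured value.

ROTATION_RATE_MS_PER_DEG = 10.0  # assumed constant rotation speed

def simulated_response_time(angle_deg):
    # Response time proportional to angular disparity between the
    # target shape and its rotated copy.
    return ROTATION_RATE_MS_PER_DEG * angle_deg

print(simulated_response_time(90), simulated_response_time(180))  # 900.0 1800.0
```

The doubling of the rotation component is exactly the pattern that makes the mental-image interpretation of these tests so compelling.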

One extraordinary possibility, I think, is that aphantasia is in reality a kind of mental blindsight. Subjects with blindsight are genuinely unable to see things consciously, but respond to visual tasks with a success rate far better than chance. It seems that while they can’t see consciously, by some other route their unconscious mind still can. It seems tantalisingly possible to me that aphantasics have an equivalent problem with mental images; they do form mental images but are never aware of them. Some might feel that suggestion is nonsensical; doesn’t the very idea of a mental image imply its presence in consciousness? Well, perhaps not: perhaps our subconscious has a much more developed phenomenal life than we have so far realised?

At any rate, expect to hear much more about this…

Smooth or chunky? Like peanut butter, experience could have different granularities; in practice it seems the answer might be ‘both’. Herzog, Kammer and Scharnowski here propose a novel two-level model in which initial processing is done on a regular stream of fine-grained percepts. Here things get ‘labelled’ with initial colours, durations, and so on, but relatively little of this processing ever becomes conscious. Instead the results lurch into conscious awareness in irregular chunks of up to 400 milliseconds in duration. The result is nevertheless an apparently smooth and seamless flow of experience – the processing edits everything into coherence.

Why adopt such a complex model? What’s wrong with just supposing that percepts roll straight from the senses into the mind, in a continuous sequence? That is after all how things look. The two-level system is designed to resolve a conflict between two clear findings. On the one hand we do have quite fine-grained perception; we can certainly be aware of things that are much shorter than 400ms in duration. On the other, certain interesting effects very strongly suggest that some experiences only enter consciousness after 400ms.

If for example, we display a red circle and then a green one a short distance away, with a delay of 400ms, we do not experience two separate circles, but one that moves and changes colour. In the middle of the move the colour suddenly switches between red and green (see the animation – does that work for you?). But our brain could not have known the colour of the second circle until after it appeared, and so it could not have known half-way through that the circle needed to change. The experience can only have been fed to consciousness after the 400ms was up.

A comparable result is obtained with the intermittent presentation of verniers. These are pairs of lines offset laterally to the right or left. If two different verniers are rapidly alternated, we don’t see both, but a combined version in which the offset is the average of those in the two separate verniers. This effect persists for alternations up to 400ms. Again, since the brain cannot know the second offset until it has appeared, it cannot know what average version to present half-way through; ergo, the experience only becomes conscious after a delay of 400ms.
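The logic of the two-level model in the vernier case can be sketched in a few lines of code, on assumptions of my own: fine-grained ‘labelled’ percepts are buffered unconsciously, and a conscious percept is only emitted once a window of up to 400ms closes; only then can the averaged offset be computed, since it depends on both verniers.

```python
# Toy sketch of chunked conscious perception. Events arriving within
# one 400ms window are fused into a single averaged percept; events
# further apart produce separate percepts. Purely illustrative.

WINDOW_MS = 400

def conscious_chunks(events):
    """events: list of (time_ms, offset). Returns one averaged
    percept per closed window."""
    chunks, current, start = [], [], None
    for t, offset in events:
        if start is None:
            start = t
        if t - start >= WINDOW_MS:
            chunks.append(sum(current) / len(current))
            current, start = [], t
        current.append(offset)
    if current:
        chunks.append(sum(current) / len(current))
    return chunks

# Two verniers alternating within 400ms, offsets +1 (right), -1 (left):
print(conscious_chunks([(0, 1), (100, -1), (200, 1), (300, -1)]))  # [0.0]
# The same verniers 450ms apart yield two distinct percepts:
print(conscious_chunks([(0, 1), (450, -1)]))  # [1.0, -1.0]
```

The point of the sketch is just that the fused percept cannot exist until the window closes, which is the postdictive delay the experiments reveal.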

It seems that even verbal experience works the same way, with a word at the end of a sentence able to smoothly condition our understanding of an ambiguous word (‘mouse’ – rodent or computer peripheral?) if the delay is within 400ms; and there are other examples.

Curiously, the authors make no reference to the famous finding of Libet that our awareness of a decision occurs up to 500ms after it is really made. Libet’s research was about internal perception rather than percepts of external reality, but the similarity of the delay seems striking and surely strengthens the case for the two-level model; it also helps to suggest that we are dealing with an effect which arises from the construction of consciousness, not from the sensory organs or very early processes in the retina or elsewhere.

In general I think the case for a two-level process of some kind is clear and strong, and well set out here. We may reasonably be a little more doubtful about the details of the suggested labelling process; at one point the authors refer to percepts being assigned ‘numbers’; hang on to those quote marks would be my advice.
The authors are quite open about their uncertainty around consciousness itself. They think that the products of initial processing may enter consciousness when they arrive at attractor states, but the details of why and how are not really clear; nor is it clear whether we should think of the products being passed to consciousness (or relabelled as conscious?) when they hit attractor states or becoming conscious simply by virtue of being in an attractor state. We might go so far as to suppose that the second level, consciousness, has no actual location or consistent physical equivalent, merely being the sum of all resolved perceptual states in the brain at any one time.

That points to the wider issue of the Frame Problem, which the paper implicitly raises but does not quite tackle head on. The brain gets fed a very variable set of sensory inputs and manages to craft a beautifully smooth experience out of them (mostly); it looks as if an important part of this must be taking place in the first level processing, but it is a non-trivial task which goes a long way beyond interpolating colours and positions.

The authors do mention the Abhidharma Buddhist view of experience as a series of discrete moments within a flow; we’ve touched on this before in discussions of findings by Varea and others that the flow of consciousness seems to have a regular pulse; it would be intriguing and satisfying if that pulse could be related to the first level of processing hypothesised here. We’re apparently talking about something in the 100ms range, which seems a little on the long side for the time slices proposed; but perhaps a kind of synthesis is possible…?

Why are we evil? This short piece asks how the “Dark Tetrad” of behaviours could have evolved.

The Dark Tetrad is an extended version of the Dark Triad of three negative personality traits/behaviours (test yourself here – I scored ‘infrequently vile’). The original three are ‘Machiavellianism’ – selfishly deceptive, manipulative behaviour; Psychopathy – indifference to, or failure to perceive, the feelings of others; and Narcissism – vain self-obsession. Clearly there’s some overlap, and it may seem that these are no more than minor variants on selfishness, but research does suggest that they are distinct. Machiavellians, for example, do not over-rate themselves and don’t need to be admired; narcissists aren’t necessarily liars or deceivers; psychopaths are manipulative but don’t really get people.

These three traits account for a good deal of bad behaviour, but it has been suggested that they don’t explain everything; we also need a fourth kind of behaviour, and the leading candidate is ‘Everyday Sadism’: simple pleasure in the suffering of others, regardless of whether it brings any other advantage for oneself. Whether this is ultimately the correct analysis of ‘evil’ behaviour or not, all four types are readily observable in varying degrees. Socially they are all negative, so how could they have evolved?

There doesn’t seem to me to be much mystery about why ‘Machiavellian’ behaviour would evolve (I should acknowledge at this point that using Machiavelli as a synonym for manipulativeness actually understates the subtlety and complexity of his philosophy). Deceiving others in one’s own interests has obvious advantages which are only negated if one is caught. Most of us practise some mild cunning now and then, and the same sort of behaviour is observable in animals, notably our cousins the chimps.

Psychopathy is a little more surprising. Understanding other people, often referred to as ‘theory of mind’, is a key human achievement, though it seems to be shared by some other animals to a degree. However, psychopaths are not left puzzled by their fellow human beings; it’s more that they lack empathy and see others as simply machines whose buttons can freely be pushed. This can be a successful attitude and we are told that somewhat psychopathic traits are commonly found in the successful leaders of large organisations. That raises the question of why we aren’t all psychopaths; my guess is that psychopathic behaviour pays off best in a society where most people are normal; if the proportion grows above a certain small level, the damage done by competition between psychopaths starts to outweigh the benefits and the numbers adjust.
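That guess about frequency-dependent payoffs can be made concrete with a crude replicator-style model. All the payoff numbers below are invented purely to show the equilibrium logic, not drawn from the article or from any data: exploitation pays when rare, but exploiters clashing with one another destroy value, so the trait settles at a low frequency.

```python
# Hedged, illustrative frequency-dependent selection model.
# Exploiting a trusting 'normal' partner yields a bonus; meeting
# another exploiter is costly. Frequency equilibrates where the
# trait's payoff equals the cooperative baseline.

B_EXPLOIT = 1.0   # assumed gain from exploiting a normal partner
C_CLASH = 9.0     # assumed cost of clashing with another exploiter
BASELINE = 1.0    # payoff to ordinary cooperative behaviour

def psychopath_payoff(p):
    # Expected payoff at frequency p: meet a normal with prob 1 - p,
    # another exploiter with prob p.
    return BASELINE + (1 - p) * B_EXPLOIT - p * C_CLASH

def step(p, rate=0.05):
    # Replicator-style update: the trait spreads when it beats the
    # baseline and shrinks when it does not.
    p += rate * p * (1 - p) * (psychopath_payoff(p) - BASELINE)
    return min(max(p, 0.0), 1.0)

p = 0.01
for _ in range(2000):
    p = step(p)
print(round(p, 3))  # settles near B_EXPLOIT / (B_EXPLOIT + C_CLASH) = 0.1
```

Whatever the numbers, the shape of the result is the point: a stable minority of exploiters, never zero and never a majority.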

Narcissism is puzzling because narcissists are less self-sufficient than the rest of us and also have deluded ideas about what they can accomplish; neither of these is a positive trait in evolutionary terms. One positive side is that narcissists expect a lot from themselves and in the right circumstances they will work hard and behave well in order to protect their own self-image. It may be that in the right context these tendencies win esteem and occasional conspicuous success, and that this offsets the disadvantages.

Finally, sadism. It’s hard to see what benefits accrue to anyone from simply causing pain, detached from any material advantage. Sadism clearly requires theory of mind – if you didn’t realise other people were suffering, there would be no point in hurting them. It’s difficult to know whether there are genuine animal examples. Cats seem to torture mice they have caught, letting them go and instantly catching them again, but to me the behaviour seems automatic or curious, not motivated by any idea that the mice experience pain. Similarly in other cases it generally seems possible to find an alternative motivation.

What evolutionary advantage could sadism confer? Perhaps it makes you more frightening to rivals – but it may also make and motivate enemies. I think in this case we must assume that rather than being a trait with some downsides but some compensating value it is a negative feature that just comes along unavoidably with a large free-running brain. The benefit of consciousness is that it takes us out of the matrix of instinctive and inherited patterns of behaviour and allows detached thought and completely novel responses. In a way Nature took a gamble with consciousness, like a good manager recognising that good staff might do better if left without specific instructions. On the whole, the bet has paid off handsomely, but it means that the chance of strange and unfavourable behaviour in some cases or on some occasions just has to be accepted. In the case of everyday sadism, the sophisticated theory of mind which human beings have is put to distorted and unhelpful use.

Maybe then, sadism is the most uniquely human kind of evil?

Are our minds being dumbed by digits – or set free by unreading?

Frank Furedi notes that it has become common to deplore a growing tendency to inattention. In fact, he says, this kind of complaint goes back to the eighteenth century. Very early on the failure to concentrate was treated as a moral failing rather than simple inability; Furedi links this with the idea that attention to proper authority is regarded as a duty, so that inattention amounts to disobedience or disrespect. What has changed more recently, he suggests, is that while inattention was originally regarded as an exceptional problem, it is now seen as our normal state, inevitable: an attitude that can lead to fatalism.

The advent of digital technology has surely affected our view. Since the turn of the century or earlier there have been warnings that constant use of computers, and especially of the Internet, would change the way our brains worked; would damage us intellectually if not morally. Various kinds of damage have been foreseen; shortened attention span, lack of memory, dependence on images, lack of concentration, failure of analytical skills and inability to pull the torrent of snippets into meaningful structures. ‘Digital natives’ might be fluent in social media and habituated to their own strange new world, but there was a price to pay. The emergence of Homo Zappiens has been presented as cause for concern, not celebration.

Equally there have been those who suggest that the warnings are overstated. It would, they say, actually be strange and somewhat disappointing if study habits remained exactly the same after the advent of an instant, universal reference tool; the brain would not be the highly plastic entity we know it to be if it didn’t change its behaviour when presented with the deep interactivity that computers offer; and really it’s time we stopped being surprised that changes in the behaviour of the mind show up as detectable physical changes in the brain.

In many respects, moreover, people are still the same, aren’t they? Nothing much has changed. More undergraduates than ever cope with what is still a pretty traditional education. Young people have not started to find the literature of the dark ages before the 1980s incomprehensible, have they? We may feel at times that contemporary films are dumbed down, but don’t we remember some outstandingly witless stuff from the 1970s and earlier? Furedi seems to doubt that all is well; in fact, he says, undergraduate courses are changing, and are under pressure to change more to accommodate the flighty habits of modern youth who apparently cannot be expected to read whole books. Academics are being urged to pre-digest their courses into sets of easy snippets.

Moreover, a very respectable recent survey of research found that some of the alleged negative effects are well evidenced.

 Growing up with Internet technologies, “Digital Natives” gravitate toward “shallow” information processing behaviors characterized by rapid attention shifting and reduced deliberations. They engage in increased multitasking behaviors that are linked to increased distractibility and poor executive control abilities. Digital natives also exhibit higher prevalence of Internet-related addictive behaviors that reflect altered reward-processing and self-control mechanisms.

So what are we to make of it all? Myself, I take the long view; not just looking back to the early 1700s but also glancing back several thousand years. The human brain has reshaped its modus operandi several times through the arrival of symbols and languages, but the most notable revolution was surely the advent of reading. Our brains have not had time to evolve special capacities for the fairly recent skill of reading, yet it has become almost universal, regarded as an accomplishment almost as natural as walking. It is taken for granted in modern cities – which increasingly is where we all live – that everyone can read. Surely this achievement required a corresponding change in our ability to concentrate?

We are by nature inattentive animals; like all primates we cannot rest easy – as a well-fed lion would do – but have to keep looking for new stimuli to feed our oversized brains. Learning to read, though, and truly absorbing a text, requires steady focus on an essentially linear development of ideas. Now some will point out that even with a large tome, we can skip; inattentive modern students may highlight only the odd significant passage for re-reading as though Plato need really only have written fifty sentences; some courteously self-effacing modern authors tell you which chapters of their work you can ignore if you’re already expert on A, or don’t like formulae, or are only really interested in B. True; but to me those are just the exceptions that highlight the existence of the rule that proper books require concentration.

No wonder then, that inattention first started to be seriously stigmatised in the eighteenth century, just when we were beginning to get serious about literacy; the same period when a whole new population of literate women became the readership that made the modern novel viable.

Might it not be that what is happening now is that new technology is simply returning us to our natural fidgety state, freed from the discipline of the long, fixed text? Now we can find whatever nugget of information we want without trawling through thousands of words; we can follow eccentric tracks through the intellectual realm like an excitable dog looking for rabbits. This may have its downside, but it has some promising upsides too: we save a lot of time, and we stand a far better chance of pulling together echoes and correspondences from unconnected matters, a kind of synergy which may sometimes be highly productive. Even those old lengthy tomes are now far more easily accessible than they ever were before. The truth is, we hardly know yet where instant unlimited access and high levels of interactivity will take us; but perhaps unreading, shedding some old habits, will be more a liberation than a limitation.

But now I have hit a thousand words, so I’d better shut up.

…for two theories?

Ihtio kindly drew my attention to an interesting paper which sets integrated information theory (IIT) against its own preferred set of ideas – semantic pointer competition (SPC). I’m not quite sure where this ‘one on one’ approach to theoretical discussion comes from. Perhaps the authors see IIT as gaining ground to the extent that any other theory must now take it on directly. The effect is rather of a single bout from some giant knock-out tournament of theories of consciousness (I would totally go for that, incidentally; set it up, somebody!).

We sort of know about IIT by now, but what is SPC? The authors of the paper, Paul Thagard and Terrence C Stewart, suggest that:

consciousness is a neural process resulting from three mechanisms: representation by firing patterns in neural populations, binding of representations into more complex representations called semantic pointers, and competition among semantic pointers to capture the most important aspects of an organism’s current state.

I like the sound of this, and from the start it looks like a contender. My main problem with IIT is that, as was suggested last time, it seems easy enough to imagine that a whole lot of information could be integrated but remain unilluminated by consciousness; it feels as if there needs to be some other functional element; but if we supply that element it looks as if it will end up doing most of the interesting work and relegate the integration process to something secondary or even less important. SPC looks to be foregrounding the kind of process we really need.

The authors provide three basic hypotheses on which SPC rests:

H1. Consciousness is a brain process resulting from neural mechanisms.
H2. The crucial mechanisms for consciousness are: representation by patterns of firing in neural populations, binding of these representations into semantic pointers, and competition among semantic pointers.
H3. Qualitative experiences result from the competition won by semantic pointers that unpack into neural representations of sensory, motor, emotional, and verbal activity.
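To give a flavour of the kind of mechanism H2 describes, here is a minimal sketch in the spirit of the semantic pointer framework of Eliasmith on which Thagard and Stewart draw: representations are high-dimensional vectors, binding is circular convolution, and the pointer most strongly activated by the current input wins the competition. The dimensionality and the winner-take-all rule are my own illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

# Toy semantic-pointer sketch: vector representations, binding by
# circular convolution, and a winner-take-all competition.

rng = np.random.default_rng(0)
D = 512  # assumed dimensionality, chosen for illustration

def vec():
    # A random unit vector standing in for a neural firing pattern.
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution: fuses two vectors into one of the same
    # dimension - the 'binding' step that builds semantic pointers.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def compete(pointers, drive):
    # Winner-take-all: the pointer most similar to the current
    # drive (e.g. sensory input) captures 'consciousness'.
    sims = {name: float(np.dot(p, drive)) for name, p in pointers.items()}
    return max(sims, key=sims.get)

red, circle, green, square = vec(), vec(), vec(), vec()
pointers = {
    "red-circle": bind(red, circle),
    "green-square": bind(green, square),
}
# A stimulus resembling a red circle wins the competition:
print(compete(pointers, bind(red, circle)))
```

Nothing here touches the Hard Problem, of course; it only shows that binding and competition are mechanically straightforward once you have the representations.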

The particular mention of the brain in H1 is no accident. The authors stress that they are offering a theory of how brains work. Perhaps one day we’ll find aliens or robots who manage some form of consciousness without needing brains, but for now we’re just doing the stuff we know about. “…a theory of consciousness should not be expected to apply to all possible conscious entities.”

Well, actually, I’d sort of like it to – otherwise it raises questions about whether it really is consciousness itself we’re explaining. The real point here, I think, is meant to be a criticism of IIT, namely that it is so entirely substrate-neutral that it happily assigns consciousness to anything that is sufficiently filled with integrated information. Thagard and Stewart want to distance themselves from that, claiming it as a merit of their theory that it only offers consciousness to brains. I sympathise with that to a degree, but if it were me I’d take a slightly different line, resting on the actual functional features they describe rather than simple braininess. The substrate does have to be capable of doing certain things, but there’s no need to assume that only neurons could conceivably do them.

The idea of binding representations into ‘semantic pointers’ is intriguing and seems like the right kind of way to be going; what bothers me most here is how we get the representations in the first place. Not much attention is given to this in the current paper: Thagard and Stewart say neurons that interact with the world and with each other become “tuned” to regularities in the environment. That’s OK, but not really enough. It can’t be that mere interaction is enough, or everything would be a prolific representation of everything around it; but picking out the right “regularities” is a non-trivial task, arguably the real essence of representation.
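One classic sketch of how mere exposure could nonetheless yield selective tuning is Oja’s Hebbian learning rule, which tunes a single neuron’s weights to the principal regularity in its inputs. To be clear, this is my illustration of one candidate mechanism, not anything Thagard and Stewart propose; and it rather supports the point above, since the rule only finds regularities that dominate the input statistics.

```python
import numpy as np

# Oja's rule: Hebbian learning with built-in normalisation, which
# drives the weight vector toward the principal component of the
# input distribution. All parameters here are illustrative.

rng = np.random.default_rng(1)
w = rng.standard_normal(2)
w /= np.linalg.norm(w)

# Inputs with one dominant regularity: strong variance along (1, 1),
# plus weaker isotropic noise.
direction = np.array([1.0, 1.0]) / np.sqrt(2)
for _ in range(5000):
    x = direction * rng.standard_normal() * 3.0 + rng.standard_normal(2) * 0.3
    y = w @ x
    w += 0.01 * y * (x - y * w)  # Hebbian term minus decay

# The weights end up aligned with the dominant input direction.
print(abs(w @ direction))
```

The neuron becomes a detector for the regularity, but only because that regularity carried most of the variance; picking out the regularities that matter to the organism is the harder, unsolved part.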

Competition is the way particular pointers get selected to enter consciousness, according to H2; I’m not exactly sure how that works and I have doubts about whether open competition will do the job. One remarkable thing about consciousness is its coherence and direction, and unregulated competition seems unlikely to produce that, any more than a crowd of people struggling for access to a microphone would produce a fluent monologue. We can imagine that a requirement for coherence is built in, but the mechanism that judges coherence turns out to be rather important and rather difficult to explain.

So does SPC deliver? H3 claims that it gives rise to qualitative experience: the paper splits the issue into two questions: first, why are there all these different experiences, and second, why is there any experience at all? On the first, the answers are fairly good, but not particularly novel or surprising; a diverse range of sensory inputs and patterns of neural firing naturally give rise to a diversity of experience. On the second question, the real Hard Problem, we don’t really get anywhere; it’s suggested that actual experience is an emergent property of the three processes of consciousness. Maybe it is, but that doesn’t really explain it. I can’t seriously criticise Thagard and Stewart because no-one has really done any better with this; but I don’t see that SPC has a particular edge over IIT in this respect either.

Not that their claim to superiority rests on qualia; in fact they bring a range of arguments to suggest that SPC is better at explaining various normal features of consciousness. These vary in strength, in my opinion. First feature up is how consciousness starts and stops. SPC has a good account, but I think IIT could do a reasonable job, too. The second feature is how consciousness shifts, and this seems a far stronger case; pointers naturally lend themselves better to this than the gradual shifts you would at first sight expect from a mass of integrated information. Next we have a claim that SPC is better at explaining the different kinds or grades of consciousness that different organisms presumably have. I suppose the natural assumption, given IIT, would be that you either have enough integration for consciousness or you don’t. Finally, it’s claimed that SPC is the winner when it comes to explaining the curious unity/disunity of consciousness. Clearly SPC has some built-in tools for binding, and the authors suggest that competition provides a natural source of fragmentation. They contrast this with Tononi’s concept of quantity of consciousness, an idea they disparage as meaningless in the face of the mental diversity of the organisms in the world.

As I say, I find some of these points stronger than others, but on the whole I think the broad claim that SPC gives a better picture is well founded. To me it seems the advantages of SPC mainly flow from putting representation and pointers at the centre. The dynamic quality this provides, and the spark of intentionality, make it better equipped to explain mental functions than the more austere apparatus of IIT. To me SPC is like a vehicle that needs overhauling and some additional components (some of those not readily available); it doesn’t run just now but you can sort of see how it would. IIT is more like an elegant sculptural form which doesn’t seem to have a place for the wheels.

Worse than wrong? A trenchant piece from Michael Graziano likens many theories of consciousness to the medieval theory of humours; in particular the view that laziness is due to a build-up of phlegm. It’s not that the theory is wrong, he says – though it is – it’s that it doesn’t even explain anything.

To be fair I think the theory of the humours was a little more complex than that, and there is at least some kind of hand-waving explanatory connection between the heaviness of phlegm and slowness of response. According to Graziano such theories flatter our intuitions; they offer a vague analogy which feels metaphorically sort of right – but, on examination, no real mechanism. His general point is surely very sound; there are indeed too many theories about conscious experience that describe a reasonably plausible process without ever quite explaining how the process magically gives rise to actual feeling, to the ineffable phenomenology.

As an example, Graziano mentions a theory that neural oscillations are responsible for consciousness; I think he has in mind the view espoused by Francis Crick and others that oscillations at 40 hertz give rise to awareness. This idea was immensely popular at one time and people did talk about “40 hertz” as though it was a magic key. Of course it would have been legitimate to present this as an enigmatic empirical finding, but the claim seemed to be that it was an answer rather than an additional question. So far as I know Graziano is right to say that no-one ever offered a clear view as to why 40 hertz had this exceptional property, rather than 30 or 50, or for that matter why co-ordinated oscillation at any frequency should generate consciousness. It is sort of plausible that harmonising on a given frequency might make parts of the brain work together in some ways, and people sometimes took the view that synchronised firing might, for example, help explain the binding problem – the question of how inputs from different senses arriving at different times give rise to a smooth and flawlessly co-ordinated experience. Still, at best working in harmony might explain some features of experience: it’s hard to see how in itself it could provide any explanation of the origin or essential nature of consciousness. It just isn’t the right kind of thing.

As a second example Graziano boldly denounces theories based on integrated information. Yes, consciousness is certainly going to require the integration of a lot of information, but that seems to be a necessary, not a sufficient condition. Intuitively we sort of imagine a computer getting larger and more complex until, somehow, it wakes up. But why would integrating any amount of information suddenly change its inward nature? Graziano notes that some would say dim sparks of awareness are everywhere, so that linking them gives us progressively brighter arrays. That, however, is no explanation, just an even worse example of phlegm.

So how does Graziano explain consciousness? He concedes that he too has no brilliant resolution of the central mystery. He proposes instead a project which asks, not why we have subjective experience, but why we think we do: why we say we do with such conviction. The answer, he suggests, is in metacognition. (This idea will not be new to readers who are acquainted with Scott Bakker’s Blind Brain Theory.) The mind makes models of the world and models of itself, and it is these inaccurate models, and the information we generate from them, that make us see something magic about experience. In the brief account here I’m not really sure Graziano succeeds in making this seem more clear-cut than the theories he denounces. I suppose the parallel existence of reality and a mental model of reality might plausibly give rise to an impression that there is something in our experience over and above simple knowledge of the world; but I’m left a little nervous about whether that isn’t another example of the kind of intuition-flattering the other theories provide.

This kind of metacognitive theory tends naturally to be a sceptical theory; our conviction that we have subjective experience proceeds from an error or a defective model, so the natural conclusion, on grounds of parsimony if no others, is that we are mistaken and there is really nothing special about our brain’s data processing after all.

That may be the natural conclusion, but in other respects it’s hard to accept. It’s easy to believe that we might be mistaken about what we’re experiencing, but can we doubt that we’re having an experience of some kind? We seem to run into quasi-Cartesian difficulties.

Be that as it may, Graziano deserves a round of applause for his bold (but not bilious) denunciation of the phlegm.

In the course of this review of Carlo Rovelli’s Seven Brief Lessons on Physics, Alva Noë asks a fascinating question: does it make any sense to imagine that one might think backward?

In physics we are accustomed to thinking that entropy always increases; things run down, spread out, differences are evened out and available energy always decreases. Are cognitive processes like that, always going in one underlying direction? Are they like a river which flows to the sea, or are they more like a path we can take in either direction?

Well, we can’t talk backwards without careful preparation, and some kinds of conscious thought resemble talking to oneself. In fact, we can’t perform most tasks backwards without rehearsal. Unless we take the trouble to learn zedabetical order, we cannot easily recite our letters in reverse. So it doesn’t seem we can do that kind of backward thinking, at least. We can, of course, read a written sentence in the reverse word order, and I suppose that in that sense we can think the same sentence backwards, or ‘backwards sentence same the think’; but we might well doubt whether that truly reverses the thought.

On the other hand, on another level, there seems to be no inherent direction to a chain of thought. If I think of an egg, it makes me think of breakfast, and breakfast makes me think of getting up; but thinking about getting up might equally have prompted thoughts of breakfast, which in turn might have brought an egg to mind. The association of ideas goes both ways. Logical thought needs a little more care. ‘If p then q’ does not allow us to conclude that if q then p – but we can say that not-q gives us not-p. With due attention to the details there doesn’t seem to be a built-in direction to deduction.
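The symmetry of deduction can be put in a worked form (this is just standard propositional logic, not anything from Noë’s review): contraposition is an equivalence, so the inference runs validly in either direction, whereas the converse simply does not follow.

```latex
(p \rightarrow q) \;\equiv\; (\lnot q \rightarrow \lnot p)
\qquad \text{but} \qquad
(p \rightarrow q) \;\not\Rightarrow\; (q \rightarrow p)
```

In that formal sense a deductive step and its contrapositive are the same thought traversed in opposite directions.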

The universal presence of memory in all our thoughts seems a deal-breaker on reversibility, though. We cannot forget things at will; memories constantly accumulate; and we cannot deliberately unthink a thought in such a way as to erase it from recollection. This one-way effect seems to be equally true of certain kinds of thought on another level. We have, let’s say, a problem in mind; all the clues we need are known. At some point they come together to point to the right conclusion – Aha! Unless we find a flaw in our reasoning, the meshing of the clues cannot be unpicked. We can’t ununderstand; the process of comprehension seems as irrevocable as the breaking of an eggshell or the cooking of an omelette.

There’s an odd thing about it though. The breaking of the egg is an example of the wider principle of entropy; the shell is destroyed, and later the protein is denatured by heat. The solving of the problem, by contrast, is constructive; we’re left with more than we had before, and our mental contents are better structured, not worse. Learning and understanding seem like a process of growth. Like the growth of a plant it is of course just a local reversal of entropy, which has to be paid for through the use of a lot of irretrievable energy; still, it’s remarkable (as is the growth of a plant, after all).

Hang on, though. Isn’t it the case that we might have been entertaining a dozen different ideas about the possible solution to our problem? Once we have the answer those other hypotheses are dropped, and indeed may even be truly forgotten. More than that, the right answer may let us simplify our ideas, the way Copernicus let us do away with epicycles and all the rest of the Ptolemaic apparatus that nobody has to think about any more. Occam tells us to minimise the number of angels we require for our theory, so isn’t the growth of understanding sometimes a synthesis which actually has the character of a reductive simplification? That isn’t usually a reversal as such, but doesn’t it involve in some sense the unthinking of certain thoughts? David Lodge somewhere has a character feel a pang of pity for all the Marxist professors in the universities of no-longer-communist countries who must presumably unspool years of patient theoretical exegesis in order to start understanding the world again.

Well, yes, but I don’t think that is truly an unthinking or reverse cogitation. Is it perhaps more like a plant managing to grow downwards? So no, as Noë implied in the first place, it doesn’t really make sense to imagine we might think backward.

Are we being watched? Over at Aeon, George Musser asks whether some AI could quietly become conscious without our realising it. After all, it might not feel the need to stop whatever it was doing and announce itself. If it thought about the matter at all, it might think it was prudent to remain unobserved. It might have another form of consciousness, not readily recognisable to us. For that matter, we might not be readily recognisable to it, so that perhaps it would seem to itself to be alone in a solipsistic universe, with no need to try to communicate with anyone.

There have been various scenarios about this kind of thing in the past which I think we can dismiss without too much hesitation. I don’t think the internet is going to attain self-awareness, because however complex it may become, it simply isn’t organised in the right kind of way. I don’t think any conventional digital computer is going to become conscious either, for similar reasons.

I think consciousness is basically an expanded development of the faculty of recognition. Animals have gradually evolved the ability to recognise very complex extended stimuli; in the case of human beings things have gone a massive leap further, so that we can recognise abstractions and generalities. This makes a qualitative change, because we are no longer reliant on what is coming in through our senses from the immediate environment; we can think about anything, even imaginary or nonsensical things.

I think this kind of recognition has an open-ended quality which means it can’t be directly written into a functional system; you can’t just code it up or design the mechanism. So no machines have been really good candidates – until recently. These days I think some AI systems are moving into a space where they learn for themselves, in a way which may be supported by their form and the algorithms that back them up, but which does have some of the open-ended qualities of real cognition. My perception is that we’re still a long way from any artificial entity growing into consciousness; but it’s no longer a possibility which can be dismissed without consideration; so a good time for George to be asking the question.

How would it happen? I think we have to imagine that a very advanced AI system has been set to deal with a very complex problem. The system begins to evolve approaches which yield results and it turns out that conscious thought – the kind of detachment from immediate inputs I referred to above – is essential. Bit by bit (ha) the system moves towards it.

I would not absolutely rule out something like that; but I think it is extremely unlikely that the researchers would fail to notice what was happening.

First, I doubt whether there can be forms of consciousness which are unrecognisable to us. If I’m right, consciousness is a kind of function which yields purposeful output behaviour, and purposefulness implies intelligibility. We would just be able to see what it was up to. Some qualifications to this conclusion are necessary. We’ve already had chess AIs that play certain end-games in ways that don’t make much sense to human observers, even chess masters, and look like random flailing. We might get some patterns of behaviour like that. But the chess ‘flailing’ leads reliably to mate, which ultimately is surely noticeable. Another point to bear in mind is that our consciousness was shaped by evolution, and by the competition for food, safety, and reproduction. The supposed AI would have evolved its consciousness in response to completely different imperatives, which might well make some qualitative difference. The thoughts of the AI might not look quite like human cognition. Nevertheless I still think the intentionality of the AI’s outputs could not help but be recognisable. In fact the researchers who set the thing up would presumably have the advantage of knowing the goals which had been set.

Second, we are really strongly predisposed to recognising minds. Meaningless whistling of the wind sounds like voices to us; random marks look like faces; anything that is vaguely humanoid in form or moves around like a sentient creature is quickly anthropomorphised by us and awarded an imaginary personality. We are far more likely to attribute personhood to a dumb robot than dumbness to one with true consciousness. So I don’t think it is particularly likely that a conscious entity could evolve without our knowing it and keep a covert, wary eye on us. It’s much more likely to be the other way around: that the new consciousness doesn’t notice us at first.

I still think in practice that that’s a long way off; but perhaps the time to think seriously about robot rights and morality has come.

The latest JCS features a piece by Christopher Curtis Sensei about the experience of achieving mastery in Aikido. It seems he spent fifteen years cutting bokken (an exercise with wooden swords, don’t ask me), becoming very proficient technically but never satisfying the old Sensei. Finally he despaired and stopped trying; at which point, of course, he made the required breakthrough. He needed to stop thinking about it. You do feel that his teacher could perhaps have saved him a few years if he had just said so explicitly – but of course you cannot achieve the state of not thinking about something directly and deliberately. Intending to stop thinking about a pink hippo involves thinking about a pink hippo; you have to do something else altogether.

This unreflective state of mind crops up in many places; it has something to do with the desirable state of ‘flow’ in which people are said to give their best sporting or artistic performances; it seems to me to be related to the popular notion of mindfulness, and it recalls Taoist and other mystical ideas about cleared minds and going with the stream. To me it evokes Julian Jaynes, who believed that in earlier times human consciousness manifested itself to people as divine voices; what we’re after here is getting the gods to shut up at last.

Clearly this special state of mind is a form of consciousness (we don’t pass out when we achieve it) and in fact on one level I think it is very simple. It’s just the absence of second-order consciousness, of thoughts about thoughts, in other words. Some have suggested that second-order thought is the distinctive or even the constitutive feature of human consciousness; but it seems clear to me that we can in fact do without it for extended periods.

All pretty simple then. In fact we might even be able to define it physiologically – it could be the state in which the cortex stops interfering and lets the cerebellum and other older bits of the brain do their stuff uninterrupted – we might develop a way of temporarily zapping or inhibiting cortical activity so we can all become masters of whatever we’re doing at the flick of a switch. What’s all the fuss about?

Except that arguably none of the foregoing is actually about this special state of mind at all. What we’re talking about is unconsidered thought, and I cannot report it or even refer to it without considering it; so what have I really been discussing? Some strange ghostly proxy? Nothing? Or are these worries just obfuscatory playing with words?

There’s another mental thing we shouldn’t, logically, be able to talk about – qualia. Qualia, the ineffable subjective aspect of things, are additional to the scientific account and so have no causal powers; they cannot therefore ever have caused any of the words uttered or written about them. Is there a link here? I think so. I think qualia are pure first-order experiences; we cannot talk about them because to talk about them is to move on to second-order cognition and so to slide away from the very thing we meant to address. We could say that qualia are the experiential equivalent of the pure activity which Curtis Sensei achieved when he finally cut bokken the right way. Fifteen years and I’ll understand qualia; I just won’t be able to tell anyone about it…