Aphantasia – mental blindsight?

People who cannot form mental images? ‘Aphantasia’ is an extraordinary new discovery; Carl Zimmer and Adam Zeman seem between them to have uncovered a fascinating and previously unknown mental deficit (although there is a suggestion that Galton and others may have been aware of it earlier).

What is this aphantasia? In essence, no pictures in the head. Aphantasics cannot ‘see’ mental images of things that are not actually present in front of their eyes. Once the possibility received publicity, Zimmer and Zeman began to hear from a stream of people who believe they have this condition. It seems people manage quite well with it, and few had ever noticed anything wrong – there’s an interesting cri de coeur from one such sufferer here. Such people assume that talk of mental images is metaphorical or figurative, and that others, like them, really deal only in colourless facts. It was the discovery of a man who had lost the visualising ability through injury that first brought the condition to notice: a minority of people who read about his problem thought it more remarkable that he had ever been able to form mental images than that he now could not.

Some caution is surely in order. When a new disease or disability comes along, there are usually people who sincerely convince themselves that they are sufferers without really having the condition; some of these self-diagnoses may be mistaken. Moreover, the phenomenology of vision has never been adequately clarified, and I strongly suspect it is more complex than we realise. There are, I think, several different senses in which you can form a mental image; those images may vary in how visually explicit they are, and it could well be that not all aphantasics are suffering the same deficit.

However that may be, it seems truly remarkable that such a significant problem could have passed unnoticed for so long. Spatial visualisation is hardly a recondite capacity; it is frequently tested. One widely used test presents the subject with a drawing of a 3D shape alongside a selection of others that resemble it; one of these is a perfect rotated copy of the original, and subjects are asked to pick it out. There is very good evidence that people solve these problems by mentally rotating an image of the target shape: shapes rotated 180 degrees regularly take twice as long to identify as ones rotated 90 degrees, and the speed of mental rotation appears to be surprisingly constant across subjects. How do aphantasics cope with these tests at all? One would think the presence of a significantly handicapped minority would have become unmissably evident by now.
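In effect, that finding describes a simple linear relationship between angle and response time. As a rough illustration only – the rate constant below is an invented placeholder, not a measured figure – the model might be sketched like this:

```python
# A minimal sketch of the linear mental-rotation model described above.
# Response time grows in proportion to the angle the mental image must be
# rotated through; the rate here is an illustrative placeholder, not a
# measured value.

MS_PER_DEGREE = 17.0  # hypothetical rotation rate (ms per degree)

def predicted_rt(angle_deg: float) -> float:
    """Predicted time (ms) to verify a shape rotated by angle_deg."""
    return MS_PER_DEGREE * angle_deg

for angle in (45, 90, 180):
    print(f"{angle:>3} degrees -> {predicted_rt(angle):.0f} ms")
# 180 degrees comes out at exactly twice the 90-degree time, matching the
# pattern reported in the rotation experiments.
```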

One extraordinary possibility, I think, is that aphantasia is in reality a kind of mental blindsight. Subjects with blindsight are genuinely unable to see things consciously, but respond to visual tasks with a success rate far better than chance. It seems that while they can’t see consciously, by some other route their unconscious mind still can. It seems tantalisingly possible to me that aphantasics have an equivalent problem with mental images; they do form mental images but are never aware of them. Some might feel that suggestion is nonsensical; doesn’t the very idea of a mental image imply its presence in consciousness? Well, perhaps not: perhaps our subconscious has a much more developed phenomenal life than we have so far realised?

At any rate, expect to hear much more about this…

Time slices and streams

Smooth or chunky? Like peanut butter, experience could have different granularities; in practice it seems the answer might be ‘both’. Herzog, Kammer and Scharnowski here propose a novel two-level model in which initial processing is done on a regular stream of fine-grained percepts. At this first level things get ‘labelled’ with initial colours, durations, and so on, but relatively little of the processing ever becomes conscious. Instead the results lurch into conscious awareness in irregular chunks of up to 400 milliseconds in duration. The result is nevertheless an apparently smooth and seamless flow of experience – the processing edits everything into coherence.

Why adopt such a complex model? What’s wrong with just supposing that percepts roll straight from the senses into the mind, in a continuous sequence? That is after all how things look. The two-level system is designed to resolve a conflict between two clear findings. On the one hand we do have quite fine-grained perception; we can certainly be aware of things that are much shorter than 400ms in duration. On the other, certain interesting effects very strongly suggest that some experiences only enter consciousness after 400ms.

If, for example, we display a red circle and then a green one a short distance away, with a delay of 400ms, we do not experience two separate circles, but a single circle that moves and changes colour. In the middle of the move the colour suddenly switches from red to green (see the animation – does that work for you?). But our brain could not have known the colour of the second circle until after it appeared, and so it could not have known half-way through the move that the circle needed to change. The experience can only have been fed to consciousness after the 400ms was up.

A comparable result is obtained with the intermittent presentation of verniers. These are pairs of lines offset laterally to the right or left. If two different verniers are rapidly alternated, we don’t see both, but a combined version in which the offset is the average of those in the two separate verniers. This effect persists for alternations up to 400ms. Again, since the brain cannot know the second offset until it has appeared, it cannot know what average version to present half-way through; ergo, the experience only becomes conscious after a delay of 400ms.
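The paper offers no implementation, of course, but the logic of the two-level story can be caricatured in a few lines of code. In this toy sketch – my construction, not the authors’ – level one registers timestamped samples continuously, while consciousness receives only a digest of each window after it closes:

```python
# A toy caricature of the two-level model; my construction, not the
# authors' implementation. Level one registers fine-grained, timestamped
# samples continuously; level two (consciousness) receives only a digest
# of each window once it closes -- which is why the reported percept can
# depend on a stimulus that arrived late in the window.

WINDOW_MS = 400  # upper bound on the unconscious integration window

def conscious_percepts(samples, window_ms=WINDOW_MS):
    """samples: (time_ms, vernier_offset) pairs in time order.
    Returns one averaged percept per completed window."""
    percepts, window, window_end = [], [], window_ms
    for t, offset in samples:
        while t >= window_end:          # window closed: flush a digest
            if window:
                percepts.append(sum(window) / len(window))
                window = []
            window_end += window_ms
        window.append(offset)
    if window:                          # flush the final window
        percepts.append(sum(window) / len(window))
    return percepts

# Two verniers with opposite offsets alternating every 50ms: consciousness
# receives a single averaged offset (0.0), not two alternating percepts.
stimulus = [(t, 1.0 if (t // 50) % 2 == 0 else -1.0) for t in range(0, 400, 50)]
print(conscious_percepts(stimulus))     # -> [0.0]
```

The only point of the caricature is that the averaged value is mathematically unavailable before the window closes; that is the whole force of the argument above.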

It seems that even verbal experience works the same way, with a word at the end of a sentence able to smoothly condition our understanding of an ambiguous word (‘mouse’ – rodent or computer peripheral?) if the delay is within 400ms; and there are other examples.

Curiously, the authors make no reference to the famous finding of Libet that our awareness of a decision occurs up to 500ms after it is really made. Libet’s research was about internal perception rather than percepts of external reality, but the similarity of the delay seems striking and surely strengthens the case for the two-level model; it also helps to suggest that we are dealing with an effect which arises from the construction of consciousness, not from the sensory organs or very early processes in the retina or elsewhere.

In general I think the case for a two-level process of some kind is clear and strong, and well set out here. We may reasonably be a little more doubtful about the details of the suggested labelling process; at one point the authors refer to percepts being assigned ‘numbers’ – hang on to those quote marks would be my advice.

The authors are quite open about their uncertainty around consciousness itself. They think the products of initial processing may enter consciousness when they arrive at attractor states, but the details of why and how are not really clear; nor is it clear whether we should think of the products as being passed to consciousness (or relabelled as conscious?) when they hit attractor states, or as becoming conscious simply by virtue of being in an attractor state. We might go so far as to suppose that the second level, consciousness, has no actual location or consistent physical equivalent, merely being the sum of all resolved perceptual states in the brain at any one time.
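For readers to whom ‘attractor state’ is unfamiliar, the classic textbook illustration is a Hopfield-style network, in which a noisy state settles onto a stored stable pattern. The sketch below shows only what ‘arriving at an attractor’ means in that standard sense; it is not a claim about the authors’ mechanism:

```python
import numpy as np

# Standard textbook machinery illustrating 'arriving at an attractor
# state', Hopfield-style; not a claim about the authors' mechanism.
# A stored pattern is a stable point of the dynamics, and a corrupted
# version of it settles back onto the pattern.

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=16)         # one stored 'percept'
W = np.outer(pattern, pattern).astype(float)   # Hebbian weight matrix
np.fill_diagonal(W, 0)                         # no self-connections

state = pattern.copy()
flips = rng.choice(16, size=4, replace=False)  # corrupt four of the units
state[flips] *= -1

for _ in range(10):                            # settle to a fixed point
    new_state = np.where(W @ state >= 0, 1, -1)
    if np.array_equal(new_state, state):       # unchanged: attractor reached
        break
    state = new_state

print(np.array_equal(state, pattern))          # True: the noisy state has
                                               # fallen into the attractor
```

The stored pattern is a fixed point of the update rule: once the state reaches it, further updates leave it unchanged, and that stability is what makes it an ‘attractor’.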

All this points to the wider issue of the Binding Problem, which the paper implicitly raises but does not quite tackle head on. The brain is fed a very variable set of sensory inputs and manages to craft a beautifully smooth experience out of them (mostly); it looks as if an important part of this must take place in the first-level processing, but it is a non-trivial task which goes a long way beyond interpolating colours and positions.

The authors do mention the Abhidharma Buddhist view of experience as a series of discrete moments within a flow. We’ve touched on this before in discussions of findings by Varela and others that the flow of consciousness seems to have a regular pulse; it would be intriguing and satisfying if that pulse could be related to the first level of processing hypothesised here. We’re apparently talking about something in the 100ms range, which seems a little on the long side for the time slices proposed; but perhaps a kind of synthesis is possible…?

Evolving the Dark Tetrad

Why are we evil? This short piece asks how the “Dark Tetrad” of behaviours could have evolved.

The Dark Tetrad is an extended version of the Dark Triad of three negative personality traits/behaviours (test yourself here – I scored ‘infrequently vile’). The original three are ‘Machiavellianism’ – selfishly deceptive, manipulative behaviour; psychopathy – indifference to, or failure to perceive, the feelings of others; and narcissism – vain self-obsession. There is obviously some overlap, and it may seem that these are merely minor variants on selfishness, but research does suggest that they are distinct. Machiavellians, for example, do not over-rate themselves and don’t need to be admired; narcissists aren’t necessarily liars or deceivers; psychopaths are manipulative but don’t really get people.

These three traits account for a good deal of bad behaviour, but it has been suggested that they don’t explain everything; we also need a fourth kind of behaviour, and the leading candidate is ‘Everyday Sadism’: simple pleasure in the suffering of others, regardless of whether it brings any other advantage to oneself. Whether or not this is ultimately the correct analysis of ‘evil’ behaviour, all four types are readily observable in varying degrees. Socially they are all negative, so how could they have evolved?

There doesn’t seem to me to be much mystery about why ‘Machiavellian’ behaviour would evolve (I should acknowledge at this point that using Machiavelli as a synonym for manipulativeness actually understates the subtlety and complexity of his philosophy). Deceiving others in one’s own interests has obvious advantages, which are only negated if one is caught. Most of us practise some mild cunning now and then, and the same sort of behaviour is observable in animals, notably our cousins the chimps.

Psychopathy is a little more surprising. Understanding other people, often referred to as ‘theory of mind’, is a key human achievement, though it seems to be shared by some other animals to a degree. However, psychopaths are not left puzzled by their fellow human beings; it’s more that they lack empathy and see others simply as machines whose buttons can freely be pushed. This can be a successful attitude, and we are told that somewhat psychopathic traits are commonly found in the successful leaders of large organisations. That raises the question of why we aren’t all psychopaths; my guess is that psychopathic behaviour pays off best in a society where most people are normal. If the proportion grows above a certain small level, the damage done by competition between psychopaths starts to outweigh the benefits, and the numbers adjust.
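That guess is essentially a frequency-dependent selection argument of the kind familiar from hawk–dove games. As a toy illustration only – the payoff numbers below are invented, and nothing here comes from the piece under discussion – the population settles where the two strategies pay equally well:

```python
# A toy, hawk-dove style illustration of the frequency-dependence guess
# above. All payoff numbers are invented. Exploiting a trusting 'normal'
# pays a little; two exploiters colliding costs a lot; the population
# therefore settles where the two strategies do equally well.

R = 2.0    # baseline payoff of an ordinary interaction (hypothetical)
B = 0.5    # exploiter's gain from a trusting victim (hypothetical)
C = 12.0   # cost when two exploiters meet (hypothetical)

def payoff_psychopath(p):   # p = current proportion of psychopaths
    return (1 - p) * (R + B) + p * (R - C)

def payoff_normal(p):       # a victim loses half of what the exploiter gains
    return (1 - p) * R + p * (R - B / 2)

p = 0.01                    # start with psychopathy rare
for _ in range(2000):       # crude replicator-style dynamics
    diff = payoff_psychopath(p) - payoff_normal(p)
    p = min(max(p + 0.5 * diff * p * (1 - p), 1e-6), 1 - 1e-6)

print(f"stable proportion of psychopaths ~ {p:.1%}")  # about 4% with these numbers
```

With these particular numbers the mixture settles at around four per cent; the general point is just that the equilibrium stays small whenever collisions between exploiters are much more costly than exploitation is profitable.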

Narcissism is puzzling because narcissists are less self-sufficient than the rest of us and also have deluded ideas about what they can accomplish; neither of these is a positive trait in evolutionary terms. On the positive side, narcissists expect a lot from themselves, and in the right circumstances they will work hard and behave well in order to protect their own self-image. It may be that in the right context these tendencies win esteem and occasional conspicuous success, and that this offsets the disadvantages.

Finally, sadism. It’s hard to see what benefit accrues to anyone from simply causing pain, detached from any material advantage. Sadism clearly requires theory of mind – if you didn’t realise other people were suffering, there would be no point in hurting them. It’s difficult to know whether there are genuine animal examples. Cats seem to torture mice they have caught, letting them go and instantly catching them again, but to me that behaviour seems automatic, or driven by curiosity, rather than motivated by any idea that the mice experience pain. Similarly, in other cases it generally seems possible to find an alternative motivation.

What evolutionary advantage could sadism confer? Perhaps it makes you more frightening to rivals – but it may also make, and motivate, enemies. I think in this case we must assume that, rather than being a trait with some downsides but some compensating value, it is a negative feature that just comes along unavoidably with a large free-running brain. The benefit of consciousness is that it takes us out of the matrix of instinctive and inherited patterns of behaviour and allows detached thought and completely novel responses. In a way Nature took a gamble with consciousness, like a good manager recognising that good staff might do better if left without specific instructions. On the whole the bet has paid off handsomely, but it means that the chance of strange and unfavourable behaviour in some cases, or on some occasions, just has to be accepted. In the case of everyday sadism, the sophisticated theory of mind which human beings have is put to distorted and unhelpful use.

Maybe then, sadism is the most uniquely human kind of evil?

Zappiens unreads

Are our minds being dumbed by digits – or set free by unreading?

Frank Furedi notes that it has become common to deplore a growing tendency to inattention. In fact, he says, this kind of complaint goes back to the eighteenth century. Very early on, the failure to concentrate was treated as a moral failing rather than a simple inability; Furedi links this with the idea that attention to proper authority is regarded as a duty, so that inattention amounts to disobedience or disrespect. What has changed more recently, he suggests, is that while inattention was originally regarded as an exceptional problem, it is now seen as our normal, inevitable state – an attitude that can lead to fatalism.

The advent of digital technology has surely affected our view. Since the turn of the century or earlier there have been warnings that constant use of computers, and especially of the Internet, would change the way our brains worked and would damage us intellectually if not morally. Various kinds of damage have been foreseen: shortened attention span, weakened memory, dependence on images, loss of concentration, failure of analytical skills and an inability to pull the torrent of snippets into meaningful structures. ‘Digital natives’ might be fluent in social media and habituated to their own strange new world, but there was a price to pay. The emergence of Homo Zappiens has been presented as cause for concern, not celebration.

Equally there have been those who suggest that the warnings are overstated. It would, they say, actually be strange and somewhat disappointing if study habits remained exactly the same after the advent of an instant, universal reference tool; the brain would not be the highly plastic entity we know it to be if it didn’t change its behaviour when presented with the deep interactivity that computers offer; and really it’s time we stopped being surprised that changes in the behaviour of the mind show up as detectable physical changes in the brain.

In many respects, moreover, people are still the same, aren’t they? Nothing much has changed. More undergraduates than ever cope with what is still a pretty traditional education. Young people have not started to find the literature of the dark ages before the 1980s incomprehensible, have they? We may feel at times that contemporary films are dumbed down, but don’t we remember some outstandingly witless stuff from the 1970s and earlier? Furedi seems to doubt that all is well; in fact, he says, undergraduate courses are changing, and are under pressure to change more to accommodate the flighty habits of modern youth who apparently cannot be expected to read whole books. Academics are being urged to pre-digest their courses into sets of easy snippets.

Moreover, a very respectable recent survey of research found that some of the alleged negative effects are well evidenced.

 Growing up with Internet technologies, “Digital Natives” gravitate toward “shallow” information processing behaviors characterized by rapid attention shifting and reduced deliberations. They engage in increased multitasking behaviors that are linked to increased distractibility and poor executive control abilities. Digital natives also exhibit higher prevalence of Internet-related addictive behaviors that reflect altered reward-processing and self-control mechanisms.

So what are we to make of it all? Myself, I take the long view; not just looking back to the early 1700s but glancing back several thousand years. The human brain has reshaped its modus operandi several times with the arrival of symbols and languages, but the most notable revolution was surely the advent of reading. Our brains have not had time to evolve special capacities for the fairly recent skill of reading, yet it has become almost universal, regarded as an accomplishment nearly as natural as walking. It is taken for granted in modern cities – which, increasingly, is where we all live – that everyone can read. Surely this achievement required a corresponding change in our ability to concentrate?

We are by nature inattentive animals; like all primates we cannot rest easy – as a well-fed lion would do – but have to keep looking for new stimuli to feed our oversized brains. Learning to read, though, and truly absorbing a text, requires steady focus on an essentially linear development of ideas. Now some will point out that even with a large tome we can skip: inattentive modern students may highlight only the odd significant passage for re-reading, as though Plato need really only have written fifty sentences; some courteously self-effacing modern authors tell you which chapters of their work you can ignore if you’re already expert on A, or don’t like formulae, or are only really interested in B. True; but to me those are just the exceptions that highlight the rule that proper books require concentration.

No wonder then, that inattention first started to be seriously stigmatised in the eighteenth century, just when we were beginning to get serious about literacy; the same period when a whole new population of literate women became the readership that made the modern novel viable.

Might it not be that what is happening now is that new technology is simply returning us to our natural fidgety state, freed from the discipline of the long, fixed text? Now we can find whatever nugget of information we want without trawling through thousands of words; we can follow eccentric tracks through the intellectual realm like an excitable dog looking for rabbits. This may have its downside, but it has some promising upsides too: we save a lot of time, and we stand a far better chance of pulling together echoes and correspondences from unconnected matters, a kind of synergy which may sometimes be highly productive. Even those old lengthy tomes are now far more easily accessible than they ever were before. The truth is, we hardly know yet where instant unlimited access and high levels of interactivity will take us; but perhaps unreading, shedding some old habits, will be more a liberation than a limitation.

But now I have hit a thousand words, so I’d better shut up.