Ways to be less conscious

Is awareness an on/off thing? Or is it on a sort of dimmer switch? A degree of variation seems to be indicated by the peculiar vagueness of some mental content (or is that just me?). Fazekas and Overgaard argue that even a dimmer switch is far too simple; they suggest that there are at least four ways in which awareness can be graduated.

First, interestingly, we can be aware of things to different degrees on different levels. Our visual system identifies dark and light, then at a higher level identifies edges, at a higher level again sees three-dimensional shapes, and higher again, particular objects. We don’t have to be equally clear at all levels, though. If we’re looking at the dog, for example, we may be aware that in the background is the cat, and a chair; but we are not distinctly aware of the cat’s whiskers or the pattern on the back of the chair. We have only a high-level awareness of cat and chair. It can work the other way, too; people who suffer from face-blindness, for example, may be well able to describe the nose and other features of someone presented to them without recognising that the face belongs to a friend.

That is certainly not the only way our awareness can vary, though; it can also be of higher or lower quality. That gives us a nice two-dimensional array of possible vagueness; job done? Well no, because Fazekas and Overgaard think quality varies in at least three ways.

  • Intensity
  • Precision
  • Temporal Stability

So in fact we have a matrix of vagueness which has four dimensions, or perhaps I should say three plus one.
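To make that structure a little more concrete, here is a toy sketch of how the ‘three plus one’ matrix might be represented; the level names, the three quality scores and every number are my own illustrative inventions, not anything proposed by the authors.

```python
from dataclasses import dataclass

@dataclass
class Quality:
    intensity: float   # how vivid the experience is (0..1)
    precision: float   # how fine-grained, as opposed to generic, it is (0..1)
    stability: float   # how constant it remains over time (0..1)

# One quality triple per level of interpretation: the "three plus one" matrix.
# A level can also simply be missing, i.e. absent rather than merely degraded.
awareness = {
    "light/dark": Quality(intensity=0.9, precision=0.8, stability=0.9),
    "edges":      Quality(intensity=0.8, precision=0.7, stability=0.9),
    "3D shapes":  Quality(intensity=0.7, precision=0.5, stability=0.8),
    "objects":    Quality(intensity=0.9, precision=0.9, stability=0.9),  # the dog, seen clearly
    # the cat's whiskers: no entry at all - that level is simply not represented
}
```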

The authors are scrupulous about explaining how intensity, precision, and temporal stability probably relate to neuronal activity, and they are quite convincing; if anything I should have liked a bit more discussion of the phenomenal aspects – what are some of these states of partially-degraded awareness actually like?

What they do discuss is what mechanism governs or produces their three quality factors. Intensity, they think, comes from the allocation of attention. When we really focus on something, more neuronal resources are pulled in and the result is in effect to turn up the volume of the experience a bit.

Precision is also connected with attention; paying attention to a feature produces sharpening and our awareness becomes less generic (so instead of merely seeing something is red, we can tell whether it is magenta or crimson, and so on). This is fair enough, but it does raise a small worry as to whether intensity and precision are really all that distinct. Mightn’t it just be that enhancing the intensity of a particular aspect of our awareness makes it more distinct and so increases precision? The authors acknowledge some linkage.

Temporal stability is another matter. Remember we’re not talking here about whether the stimulus itself is brief or sustained but whether our awareness is constant. This kind of stability, the authors say, is a feature of conscious experience rather than unconscious responses and depends on recurrence and feedback loops.

Is there a distinct mechanism underlying our different levels of awareness? The authors think not; they reckon that it is simply a matter of what quality of awareness we have on each level. (I suppose we have to allow for the possibility, if not the certainty, that some levels will be not just poor quality but actually absent; I don’t think I’m typically aware of all possible levels of interpretation when considering something.)

So there is the model in all its glory; but beware; there are folks around who argue that in fact awareness is actually not like this, but in at least some cases is an on/off matter. Some experiments by Asplund were based on the fact that if a subject is presented with two stimuli in quick succession, the second is missed. As the gap increases, we reach a point where the second stimulus can be reported; but subjects don’t see it gradually emerging as the interval grows; rather, with one gap it’s not there, while with a slightly larger one, it is.

Fazekas and Overgaard argue that Asplund failed to appreciate the full complexity of the graduation that goes on; his case focuses too narrowly on precision alone. In that respect there may be a sharp change, but in terms of intensity or temporal stability, and hence in terms of awareness overall, they think there would be graduation.

A second rival theory, which the authors call the ‘levels of processing view’ or LPV, suggests that while awareness of low-level items is graduated, at a high level you’re either aware or not. The experiments behind this view used colours to represent low-level features and numbers for high-level ones, and found that while there was graduated awareness of the former, with the latter you either got the number or you didn’t.

Fazekas and Overgaard argue that this is because colours and numbers are not really suitable for this kind of comparison. Red and blue are on a continuous scale and one can morph gradually into the other; the numeral ‘7’ does not gradually change into ‘9’. This line of argument made sense to me, but on reflection caused me some problems. If it is true that numerals are just distinct in this way, that seems to me to concede a point that makes the LPV case seem intuitively appealing; in some circumstances things just are on/off. It seemed true at first sight that numerals are distinct in this way, but when I thought back to experiences in the optician’s chair, I could remember cases where the letter I was trying to read seemed genuinely indeterminate between two or even more alternatives. Now, though, if my first thought was wrong and numerals are not in fact inherently distinct in this way, that seems to undercut Fazekas and Overgaard’s counter-argument.

On the whole the more complex model seems to provide better explanatory resources and I find it broadly convincing, but I wonder whether a reasonable compromise couldn’t be devised, allowing that for certain experiences there may be a relatively sharp change of awareness, with only marginal graduation? Probably I have come up with a vague and poorly graduated conclusion…

Consciousness: the underlying problem



What is the problem about consciousness? A Royal Institution video with interesting presentations (part 2 another time).

Anil Seth presents a striking illusion and gives an optimistic view of the ability of science to tackle the problem; or maybe we just get on with the science anyway? The philosophers may ask good questions, but their answers have always been wrong.

Barry Smith says that’s because when the philosophers have sorted a subject out it moves over into science. One problem is that we tend to miss consciousness itself and think about its contents instead. Isn’t there a problem: to be aware of your own awareness changes it? I feel pain in my body, but could consciousness be in my ankle?

Chris Frith points out that actually only a small part of our mental activity has anything to do with consciousness, and in fact there is evidence to show that many of the things we think are controlled by conscious thought really are not: a vindication of Helmholtz’s idea of unconscious inference. Thinking about your thinking messes things up? Try asking someone how happy they feel – guaranteed to make them less happy immediately…

God Helmet

Is God a neuronal delusion? Dr Michael Persinger’s God Helmet might suggest so. This nice Atlas Obscura piece by Linda Rodriguez McRobbie finds a range of views.

The helmet is far from new, partly inspired by Persinger’s observations back in the 1980s of a woman whose seizure induced a strong sense of the presence of God. In the 90s, Persinger decided to see whether he could stimulate creativity by applying very mild magnetic fields to the right hemisphere of the brain. Among other efforts he tried reproducing the kind of pattern of activity he had seen in the earlier seizure and, remarkably, succeeded in inducing a sense of the presence of God in some of his subjects. Over the years he has continued to repeat this exercise and others like it; the helmet doesn’t always induce a sense of God; sometimes people fall asleep, sometimes (like Richard Dawkins) they get very little out of the experience. Sometimes they have a vivid sense of some presence, but don’t identify it as God.

Could this, though, be the origin of theism? Do all religious experiences stem from aberrant neuronal activity? It seems pretty unlikely to me. Quite apart from the severely reductive nature of the hypothesis, it doesn’t seem to offer a broad enough account. People arrive at a belief in God by various routes, and many of them do not rely on any sense of immediate presence. For people brought up in religious families, and for pretty much everyone in, say, medieval Europe, God is or was just a fact, a natural part of the world. Some people come to belief along a purely rational path, or by one that seeks out meaning for life and the world. Such people may not require any sense of a personal presence of the divine; or they may earnestly desire it without attaining it – without thereby losing their belief. Even those whose belief stems from a sense of mystical contact with God do not necessarily or I think even typically experience it as a personal presence somewhere in the room; they might feel instead that they have ascended to a higher sphere, or experience a kind of communion which has nothing to do with location.

Equally, of course, that presence in the room may not seem like God. It might be more like the thing under the bed, or like the malign person who just might be in the shadows behind you on the dark lonely path. People who wake from sleep without immediately regaining motor control of their body may have the sense of someone sitting or pressing on their chest; a ‘hag’ or other horrid being, not God. Perhaps Persinger is lucky that his experiments do not routinely induce horror. Persinger believes that the way it works is that by stimulating one hemisphere of the brain, you make the other hemisphere aware of it, and this other presence is translated into an external person of some misty kind. I doubt it is quite like that. I do suspect that a sense of ‘someone there’ may be one of the easiest feelings to induce because the brain is predisposed towards it, just as we are predisposed towards seeing faces in random graphic noise. The evolutionary advantages of a mental system which is always ready to shout ‘look behind you!’ are surely fairly easy to see in a world where our ancestors always needed to consider the possibility that an enemy or a predator might indeed be lurking nearby. The attention wasted on a hundred false alarms is easily outbalanced by the life-saving potential of one justified one.

So what is going on? Perhaps not that much after all. Reproducing Persinger’s results has proved difficult in most cases and it seems plausible that the effect actually depends as much on suggestion as anything else. Put people into a special environment, put a mild magnetic buzz into their scalp, and some of them will report interesting experiences, especially if they have been primed in advance to expect something of the kind. It is perfectly reasonable to think that electrical interference might affect the brain, but Persinger’s magnets really are pretty mild and it seems rather unlikely that the fields in question are really strong enough to affect the firing of neurons through the skull and into the rather resistant watery mass of the brain. In addition I would have to say that the whole enterprise has a curiously dated air about it; the faith in a rather simple idea of hemispheric specialisation, the optimistic conviction that controlling the brain is going to turn out a pretty simple business, and perhaps even the love of a good trippy experience, all seem strongly rooted in late twentieth-century thinking.

Perhaps in the end the God Helmet is really another sign of an issue which has become more and more evident lately. The strong suggestibility of the human mind means that sometimes, even in neurological science, we are in danger of getting the interesting results we really wanted all along, however misleading or ill-founded they may really be.

Interesting Stuff

A nice podcast about the late great Hilary Putnam. Meaning ain’t in the head!

Joe Kern has conducted a careful investigation of identity and personhood, paring away the contingent properties and coming up with a form of materialist reincarnation which is a more serious cousin of my daughter’s metempsychotic solipsism.

A less attractive conclusion (I may be biased), but, with thanks to Jesus Olmo, here’s an examination of whether the Internet is a superorganism of a kind that may attain consciousness. I remember when they used to say the brain was like a telephone exchange (my daughter would probably think that was a kind of second-hand market for mobes).

Sub-ethics for machines?

Is there an intermediate ethical domain, suitable for machines?

The thought is prompted by this summary of an interesting seminar on Engineering Moral Agents, one of the ongoing series hosted at Schloss Dagstuhl. It seems to have been an exceptionally good session which got into some of the issues in a really useful way – practically oriented but not philosophically naive. It noted the growing need to make autonomous robots – self-driving cars, drones, and so on – able to deal with ethical issues. On the one hand it looked at how ethical theories could be formalised in a way that would lend itself to machine implementation, and on the other how such a formalisation could in fact be implemented. It identified two broad approaches: top-down, where in essence you hard-wire suitable rules into the machine, and bottom-up, where the machine learns for itself from suitable examples. The approaches are not necessarily exclusive, of course.

The seminar thought that utilitarian or Kantian theories of morality were both prima facie candidates for formalisation. Utilitarian or, more broadly, consequentialist theories look particularly promising, because calculating the optimal value (such as the greatest happiness of the greatest number) achievable from the range of alternatives on offer looks like something that can be reduced to arithmetic fairly straightforwardly. There are problems, in that consequentialist theories usually yield at least some results that look questionable in common-sense terms; finding the initial values to slot into your sums is also a non-trivial challenge – how do you put a clear numerical value on people’s probable future happiness?
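As a rough illustration of why that looks computationally tractable, here is a minimal sketch of expected-value maximisation over a handful of options; the option names, probabilities and utility figures are invented purely for illustration, and assigning such numbers in reality is exactly the hard part noted above.

```python
def expected_utility(outcomes):
    """Sum utility over the possible outcomes, each weighted by its probability."""
    return sum(probability * utility for probability, utility in outcomes)

# Hypothetical alternatives, each a list of (probability, utility) pairs for its outcomes.
options = {
    "swerve":   [(0.9, -1.0), (0.1, -50.0)],
    "brake":    [(0.7,  0.0), (0.3, -20.0)],
    "continue": [(1.0, -10.0)],
}

best = max(options, key=lambda name: expected_utility(options[name]))
print(best, expected_utility(options[best]))  # the act with the highest expected value wins
```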

A learning system eases several of these problems: you don’t need a fully formalised system, so long as you can agree on a database of examples. But you face the same problems that arise for learning systems in other contexts; you can’t have the assurance of knowing why the machine behaves as it does, and if your database has unnoticed gaps or biases you may suffer from sudden catastrophic mistakes. The seminar summary rightly notes that a machine that has learned its ethics will not be able to explain its behaviour; but I don’t know that that means it lacks agency; many humans would struggle to explain their moral decisions in a way that would pass muster philosophically. Most of us could do no more, at best, than point to harms avoided or social rules observed.
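As a minimal sketch of the bottom-up alternative – with an invented feature encoding and a toy example database, not anything from the seminar – a learner could simply label a new situation by its nearest labelled example, which also illustrates why gaps in the examples translate directly into silent failures.

```python
# Toy 'bottom-up' ethics: label a new situation by its most similar stored example.
# Features (all invented): [speed, distance_to_human, human_in_path]
examples = [
    ([0.2, 5.0, 0.0], "proceed"),
    ([0.8, 1.0, 1.0], "emergency_stop"),
    ([0.5, 2.0, 1.0], "slow_down"),
]

def nearest_label(situation):
    """Return the label of the closest example (squared Euclidean distance)."""
    def distance(features):
        return sum((a - b) ** 2 for a, b in zip(situation, features))
    return min(examples, key=lambda example: distance(example[0]))[1]

print(nearest_label([0.7, 1.2, 1.0]))    # emergency_stop - close to a sensible example
# A situation unlike anything in the database still gets *some* answer - the gap problem.
print(nearest_label([0.0, 100.0, 0.0]))  # proceed, for better or worse
```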

The seminar looked at some interesting approaches, mentioned here with tantalising brevity: Horty’s default logic; Sergot’s STIT (See To It That) logic; and the possibility of drawing on the decision theory already developed in the context of micro-economics. The latter is consequentialist in character, and there was an examination of whether in fact all ethical theories can be restated in consequentialist terms (yes, apparently, but only if you’re prepared to stretch the idea of a consequence to a point where the idea becomes vacuous). ‘Reason-based’ formalisations presented by List and Dietrich interestingly get away from narrow consequentialism and its problems by using a rightness function which can accommodate various factors.

The seminar noted that society will demand high, perhaps precautionary standards of safety from machines, and floated the idea of an ethical ‘black box’ recorder. It noted the problem of cultural neutrality and the risk of malicious hacking. It made the important point that human beings do not enjoy complete ethical agreement anyway, but argue vigorously about real issues.

The thing that struck me was how far it was possible to go in discussing morality when it is pretty clear that the self-driving cars and so on under discussion actually have no moral agency whatever. Some words of caution are in order here. Some people think moral agency is a delusion anyway; some maintain that on the contrary, relatively simple machines can have it. But I think for the sake of argument we can assume that humans are moral beings, and that none of the machines we’re currently discussing is even a candidate for moral agency – though future machines with human-style general understanding may be.

The thing is that successful robots currently deal with limited domains. A self-driving car can cope with an array of entities like road, speed, obstacle, and so on; it does not and could not have the unfettered real-world understanding of all the concepts it would need to make general ethical decisions about, for example, what risks and sacrifices might be right when it comes to actual human lives. Even Asimov’s apparently simple Laws of Robotics required robots to understand, and to recognise correctly and appropriately, the difficult concept of ‘harm’ to a human being.

One way of squaring this circle might be to say that, yes, actually, any robot which is expected to operate with any degree of autonomy must be given a human-level understanding of the world. As I’ve noted before, this might actually be one of the stronger arguments for developing human-style artificial general intelligence in the first place.

But it seems wasteful to bestow consciousness on a Roomba, both in terms of pure expense and in terms of the chronic boredom the poor thing would endure (is it theoretically possible to have consciousness without the capacity for boredom?). So really the problem that faces us is one of making simple robots, operating in restricted domains, able to deal adequately with occasional issues from the unrestricted domain of reality. Now clearly ‘adequate’ is an important word there. I believe that in order to make robots that operate acceptably in domains they cannot understand, we’re going to need systems that are conservative and tend towards inaction. We would not, I think, accept a long trail of offensive and dangerous behaviour in exchange for a rare life-saving intervention. This suggests rules rather than learning; a set of rules that allow a moron to behave acceptably without understanding what is going on.
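To make the ‘conservative rules’ idea concrete, here is a minimal sketch of what such a sub-ethical rule table might look like: a restricted-domain robot matches the situation against a few recognised labels and, for anything it cannot classify, defaults to cautious inaction. All the labels and actions are invented for illustration.

```python
SAFE_DEFAULT = "stop_and_wait"   # bias towards inaction when the situation is not understood

RULES = {
    "clear_path":       "proceed",
    "obstacle_ahead":   "stop_and_wait",
    "human_very_close": "stop_and_wait",
    "low_battery":      "return_to_base",
}

def decide(situation: str) -> str:
    """Return the rule-book action, falling back to the safe default for anything unrecognised."""
    return RULES.get(situation, SAFE_DEFAULT)

print(decide("human_very_close"))              # stop_and_wait
print(decide("unclassifiable_moral_dilemma"))  # stop_and_wait - the conservative fallback
```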

Do these rules constitute a separate ethical realm, a ‘sub-ethics’ that substitute for morality when dealing with entities that have autonomy but no agency? I rather think they might.

Explaining the Inexplicable

Here’s another IAI video on Explaining the Inexplicable: it honestly doesn’t have much to do with consciousness but today, for reasons I can’t quite put my finger on, it felt appropriate…

Nancy Cartwright says there are those who like the “Big Explainers”, theories that offer to explain everything; then there are those who cherish mystery; she situates herself in the middle somewhere – the ‘Missouri’ position. We know that some things can be explained, so let’s see what you got.

Piers Corbyn thinks the incomplete Enlightenment project has been undone by a fondness for grand theories and models (not least over climate change). We need to get back on track, and making scientific falsehood illegal would help.

James Ladyman thinks modern physics has removed the certainty that everything can be explained. Nevertheless, science has succeeded through its refusal to accept that any domain is in principle inexplicable. We should carry on and instead of trying for grand total explanations we should learn to live with partial success.

I don’t know much about this, but I reckon an explanation is an account that, when understood, stops a certain kind of worry. We may notice that most explanations reduce or simplify the buzzing complexity of the world; once we have the explanation we need only worry about a few general laws instead of millions of particles; or perhaps we know we need only worry about a simpler domain to which the original can be reduced. In short, the desire for explanation is akin to the desire for tidiness, or let’s politely call it elegance.

Don’t Sweat the Hard Problem

Set the Hard Problem aside and tackle the real problem instead, says Anil K Seth in a thought-provoking piece in Aeon. I say thought-provoking; overall I like the cut of his jib and his direction of travel: most of what he says seems right. But somehow his piece kept stimulating the cognitive faculty in my brain that generates quibbles.

He starts, for example, by saying that in philosophy a Cartesian debate over mind-stuff and matter-stuff continues to rage. Well, that discussion doesn’t look particularly lively or central to me. There are still people around who would identify as dualists in some sense, no doubt, but by and large my perception is that we’ve moved on. It’s not so much that monism won, more that that entire framing of the issue was left behind. ‘Dualist’, it seems to me, is now mainly a disparaging term applied to other people, and whether he means it or not, Seth’s remark comes across as having a tinge of that.

Indeed, he proceeds to say that David Chalmers’ hard/easy problem distinction is inherited from Descartes. I think he should show his working on that. The Hard Problem does have dualist answers, but it has non-dualist ones too. It claims there are things not accounted for by physics, but even monists accept that much. Even Seth himself surely doesn’t think that people who offer non-physics accounts of narrative or society must therefore be dualists?

Anyway, quibbling aside for now, he says we’ll get on better if we stop worrying about why consciousness exists at all and try instead to relate its features to the underlying biological processes. That is perfectly sensible. It is depressingly possible that the Hard Problem will survive every advance in understanding, even beyond the hypothetical future point when we have a comprehensive account of how the mind works. After all, we’re required to find it conceivable that my zombie twin could be exactly like me without having real subjective experience, so it must be possible that we could come to understand his mind totally without having any grasp on my qualia.

How shall we set about things, then? Seth proposes distinguishing between the level of consciousness, its contents, and the self. That feels like an uncomfortable list to me; this is uncharacteristically tidy-minded, but I like all members of a list to be exclusive and similar, whereas, as Seth confirms, the self here is to be seen as part of the contents. To me, it’s a bit as if he suggested that in order to analyse a performance of a symphony we should think about loudness, the work being performed, and the tune being played. That analogy points to another issue; ‘loudness’ is a legitimate quality of orchestral music, but we need to remember that different instruments may play at different volumes and that the music can differ in quality in lots of other important ways. Equally, the level of consciousness is not really as simple as ten places on a dial.

Ah, but Seth recognises that. He distinguishes between consciousness and wakefulness. For consciousness it’s not the number of neurons involved or their level of activity that matters. It turns out to be complexity: findings by Massimini show that pulses sent into a brain in dreamless sleep produce simple echoes; sent into a conscious brain (whose overall level of activity may not be much greater) they produce complex reflected and transformed patterns. Seth hopes that this can be the basis of a ‘conscious meter’, the value of which for certain comatose patients is readily apparent. He is pretty optimistic generally about how much light this might shed on consciousness, rather as thermometers transformed…

“our physical understanding of heat (as average molecular kinetic energy)”

(Unexpectedly, a physics quibble; isn’t that temperature? Heat is transferred energy, isn’t it?)

Of course a complex noise is not necessarily a symphony and complex brain activity need not be conscious; Seth thinks it needs to be informative (whatever that may mean) and integrated. This of course links with Tononi’s Integrated Information Theory, but Seth sensibly declines to go all the way with that; to say that consciousness just is integrated information seems to him to be going too far; yielding again to the desire for deep, final yet simple answers, a search which just leads to trouble.
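For a very rough sense of how the complexity of an ‘echo’ might be scored, here is a toy sketch that binarises a channels-by-time response and counts Lempel-Ziv phrases. This is only my own stand-in for the perturbational measures mentioned above, not Massimini’s actual procedure (which involves source modelling and statistical thresholding), and the example ‘responses’ are made up.

```python
import numpy as np

def lz_complexity(bits):
    """Count phrases in a simple LZ76-style parse of a binary sequence."""
    s = ''.join('1' if b else '0' for b in bits)
    i, phrases, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the current phrase while it has already occurred earlier in the sequence
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def normalised_complexity(response):
    """Flatten a channels-by-time boolean response and normalise the phrase count."""
    bits = response.flatten()
    return lz_complexity(bits) * np.log2(bits.size) / bits.size

rng = np.random.default_rng(0)
rich_echo = rng.random((8, 300)) > 0.5                        # varied, 'awake-like' pattern
stereotyped_echo = np.tile(np.arange(300) % 50 < 5, (8, 1))   # simple, repetitive pattern

print(normalised_complexity(rich_echo))         # close to 1: a complex, information-rich echo
print(normalised_complexity(stereotyped_echo))  # much smaller: a simple, stereotyped echo
```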

Instead he proposes, drawing on the ideas of Helmholtz, that we see the brain as a prediction machine. He draws attention to the importance of top-down influences on perception; that is, instead of building up a picture from the elements supplied by the senses, the brain often guesses what it is about to see and hear, and presents us with that unless contradicted by the senses – sometimes even if it is contradicted by the senses. This is hardly new (obviously not if it comes from Helmholtz (1821-1894)), but it does seem that Seth’s pursuit of the ‘real problem’ is yielding some decent research.
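A toy, hedged illustration of that top-down idea: below, the ‘percept’ is just a precision-weighted blend of what was predicted and what the senses report, so a confident prediction can dominate weak sensory evidence. The numbers and weightings are my own inventions, not a model proposed by Seth or Helmholtz.

```python
def percept(prediction, sensory, prediction_precision, sensory_precision):
    """Blend prediction and sensory evidence, each weighted by how much it is trusted."""
    total = prediction_precision + sensory_precision
    return (prediction_precision * prediction + sensory_precision * sensory) / total

# A strongly trusted prediction barely budges when the sensory signal is noisy...
print(percept(prediction=1.0, sensory=0.0, prediction_precision=9.0, sensory_precision=1.0))  # 0.9
# ...but a precise sensory signal that contradicts the prediction wins out.
print(percept(prediction=1.0, sensory=0.0, prediction_precision=1.0, sensory_precision=9.0))  # 0.1
```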

Finally Seth goes on to talk a little about the self. Here he distinguishes between bodily, perspectival, volitional, narrative and social selves. I feel more comfortable with this list than the other one – except that these are all deemed to be merely experienced. You can argue that volition is merely an impression we have; that we just think certain things are under our conscious control – but you have to argue for it. Just including that implicitly in your categorisation looks a bit question-begging.

Ah, but Seth does go on to present at least a small amount of evidence. He talks first about a variant of the rubber hand experiment, in which said item is made to ‘feel’ like your own hand: it seems that making a virtual hand flash in time with the subject’s pulse enhances the impression of ownership (compared with a hand that flashes out of synch), which is indeed interesting. And he mentions that the impression of agency we have is reinforced when our predictions about the result are borne out. That may be so, but the fact that our impression of agency can be influenced by other factors doesn’t mean our agency is merely an impression – any more than a delusion about a flashing hand proves we don’t really have a hand.

But honestly, quibbles aside this is sensible stuff. Maybe I should give all that Hard Problem stuff a rest…

Male and female brains

A debate from IAI about male and female minds. It is pretty much agreed between the speakers that men’s brains and women’s brains are not really different; the claimed physical differences all come down to size, women being smaller on average. Behavioural and psychological differences exist, but only statistically; if you plot individuals along a line, there is far more overlap than difference. All of that is ably set out by Gina Rippon. Simon Baron-Cohen agrees but wants to reserve some space for the influence of biology, which affects such matters as the incidence of autism. Helena Cronin puts it all down to evolution; you’ve got two strategies, competing for mates or nurturing your offspring; men tend to the first, women to the second, and many evolved differences flow from that, although the human sexes are less distinct than those in some mammal species.

Perhaps the crux of the debate comes when Cronin denies the existence of the ‘glass ceiling’; fewer women get to the board room, she says, because fewer women choose that path. Rippon responds that there is still evidence that applicants with male names are treated more favourably.
At any rate, it seems that if we thought men had a thicker corpus callosum, or differed in brain structure in other ways, we were just wrong.
If you’re thirsting for more controversy on gender, you might want to look at Philippe Van Parijs’ paper on several apparent disadvantages to being male (via Crooked Timber).