Consciousness Not Needed

Artificial General Intelligence – human-style consciousness in machines – is unnecessary, says Daniel Dennett, in an interesting piece that makes several unexpected points. His thinking seems to have moved on in certain respects, though I think the underlying optimism about digital cognition is still there.

He starts out by highlighting some dangers that arise even with the non-conscious kinds of AI we have already. Recent developments make it easy to fake very realistic video of recognisable people doing or saying whatever we want. We can imagine, Dennett says, ‘… a return to analog film-exposed-to-light, kept in “tamper-proof” systems until shown to juries, etc.’ Well, I think that scenario will remain imaginary. These are not completely new concerns; similar ones go back to the people who in the last century used to say ‘the camera cannot lie’ and were so often proven wrong. Actually the question of whether a piece of evidence is digital or analog is pretty well irrelevant; but its provenance, whether it could have been interfered with, that business about tamper-proof containers, has always been the real concern and will no doubt remain so in one form or another (I remember the special cassette recorders once used by the authorities to interview suspects, which automatically made a copy for the interviewee and could not erase or rewind the tape).

I think his concerns have a more solid foundation though, when he goes on to say that there is now some danger of people mistaking simple AI for the kind of conscious entity they can trust. People do sometimes seem willing to be convinced rather easily that a machine has a mind of its own. That tendency in itself is also not exactly new, going back to Weizenbaum’s simple chat program ELIZA (as Dennett says); but these days serious discussion is beginning about topics like robot rights and robot morality. No reason why we shouldn’t discuss them – I think we should – but the idea that they are issues of immediate practical concern seems radically premature. Still, I’m not that worried. It’s true that some people will play along with a chatbot in the Loebner contest, or pretend Siri or Alexa is a thinking being, but I think anyone who is even half-serious about it can easily tell the difference. Dennett suggests we might need licensed operators trained to be hard-nosed and unsympathetic to AIs (‘an ugly talent, reeking of racism’ – !!!), but I don’t think it’s going to be that bad.

Dennett emphasises that current AI lacks true agency and calls on the creators of new systems to be more upfront about the fact that their humanising touches are fakery and even ‘false advertising’. I have the impression that Dennett would once have considered agency, as a concept, a little fuzzy round the edges, a matter of explanatory stances and optimality rather than a clear reality whose sharp edges needed to be strongly defended. Years ago he worked with Rodney Brooks on Cog, a deliberately humanoid robot they hoped would attain consciousness (it all seemed so easy back then…) and my impression is that the strategy then had a large element of ‘fake it till you make it’. But hey, I wouldn’t criticise Dennett for allowing his outlook to develop in the light of experience.

On to the two main points. Dennett says we don’t need conscious AI because there is plenty of natural human consciousness around; what we need is intelligent tools, somewhat like oracles perhaps (given the history of real oracles that might be a dubious comparison – is there a single example of an oracle that was both clear and helpful? In The Golden Ass there’s a pair of soothsayers who secretly give the same short verse to every client; in the end they’re forced to give up the profitable business, not by being rumbled, but by sheer boredom.)

I would have thought that there were jobs a conscious AI could do for us. Consciousness allows us to reflect on future and imagined contingencies, spot relevant factors from scratch, and rise above the current set of inputs. Those ought to be useful capacities in a lot of routine planning and management (I can’t help thinking they might be assets for self-driving vehicles); yes, humans could do it all, but it’s boring, detailed, and takes too long. I reckon, if there’s a job you’d give a slave, it’s probably right for a robot.

The second main point is that we ought to be wary of trusting conscious AIs because they will be invulnerable. Putting them in jail is meaningless because they live in boxes anyway; they can copy themselves and download backups, so they don’t die; unless we build in some pain function, there are really no sanctions to underpin their morality.

This is interesting because Dennett by and large assumes that future conscious AIs would be entirely digital, made of data; but the points he makes about their immortality and generally Platonic existence implicitly underline how different digital entities are from the one-off reality of human minds. I’ve mentioned this ontological difference before, and it surely provides one good reason to hesitate before assuming that consciousness can be purely digital. We’re not just data, we’re actual historical entities; what exactly that means, whether something to do with Meinongian distinctions between existence and subsistence, or something else entirely, I don’t really think anyone knows, frustrating as that is.

Finally, isn’t it a bit bleak to suggest that we can’t trust entities that aren’t subject to the death penalty, imprisonment, or other punitive sanctions? Aren’t there other grounds for morality? Call me Pollyanna, but I like to think of future conscious AIs proving irrefutably for themselves that virtue is its own reward.

Consciousness without Content

Is there such a thing as consciousness without content? If so, is that minimal, empty consciousness, in fact, the constant ground underlying all conscious states? Thomas Metzinger launched an investigation of this question in the third of his Carnap lectures a year ago; there’s a summary here (link updated to correct version) in a discussion paper, and a fully worked-up paper will appear next year (hat-tip to Tom Clark for drawing this to my attention). The current paper is exploratory in several respects. One possible result of identifying the hypothetical state of Minimal Phenomenal Experience (MPE) would be to facilitate the identification of neural correlates; Metzinger suggests we might look to the Ascending Reticular Arousal System (ARAS), but offers it only as a plausible place-holder which future research might set aside.

More philosophically, the existence of an underlying conscious state which doesn’t represent anything would be a fatal blow to the view that consciousness is essentially representational in character. On that widely-held view, a mental state that doesn’t feature representation cannot in fact be conscious at all, any more than text that contains no characters is really text. The alternative is to think that consciousness is more like a screen being turned on; we see only (let’s say) a blank white expanse, but the basic state, precondition to the appearance of images, is in place, and similarly MPE can be present without ‘showing us’ anything.

There’s a danger here of getting trapped in an essentially irrelevant argument about the difference between representing nothing and not representing anything, but I think it’s legitimate to preserve representationalism (as an option at least) merely by claiming that even a blank screen necessarily represents something, namely a white void. Metzinger prefers to suggest that the MPE represents “the hidden cause of the ARAS-signal”. That seems implausible to me, as it seems to involve the unlikely idea that we all have constantly in mind a hidden thing most of us have never heard of.

Metzinger does a creditable job of considering evidence from mystic experience as well as dreamless sleep. There is considerable support here for the view that when the mind is cleared, consciousness is not lost but purified. Metzinger rightly points to some difficulties with taking this testimony on board. One is the likelihood of what he calls “theory contamination”. Most contemplatives are deeply involved with mystic or scriptural traditions that already tell them what is to be expected. Second is the problem of pinning down a phenomenal experience with no content, which automatically renders it inexpressible or ineffable. Metzinger makes it clear that this is not any kind of magic or supra-scientific ineffability, just the practical methodological issue that there isn’t, as it were, anything to be said about non-existent content. Third we have an issue Metzinger calls “performative self-contradiction”. Reports of what you get when your mind is emptied make clear that the MPE is timeless, placeless, lacking sensory character, and so on. Metzinger is a little disgusted with this; if the experience was timeless, how do you know it happened last Tuesday between noon and ten past? People keep talking about brilliance and white light, which should not be present in a featureless experience!

Here I think he under-rates the power and indeed the necessity of metaphors. To describe lack of content we fall back on a metaphor of blindness, but to be in darkness might imply simple failure of the eyes, so we tend to go for our being blinded by powerful light and the vastness of space. It’s also possible that white is a default product of our neural systems, which when deprived of input are known to produce moire patterns and blobs of light from nowhere. Here we are undoubtedly getting into the classic problems that affect introspection; you cannot have a cleared mind and at the same time be mentally examining your own phenomenal experience. Metzinger aptly likens these problems to trying to check whether the light in the fridge goes off when the door is closed (I once had one that didn’t, incidentally; it gave itself away by getting warm and unhelpfully heating food placed near it). Those are real problems that have been discussed extensively, but I don’t think they need stop the investigation. In a nutshell, William James was right to say that introspection must be retrospection; we examine our experiences afterwards. This perhaps implies that memory must persist alongside MPE, but that seems OK to me. Without expressing it in quite these terms, Metzinger reaches broadly similar conclusions.

Metzinger is mainly concerned to build a minimal model of the basic MPE, and he comes up with six proposed constraints, giving him in effect not a single MPE state but a 6-dimensional space. The constraints are as follows.

• PC1: Wakefulness: the phenomenal quality of tonic alertness.

• PC2: Low complexity of reportable content: an absence of high-level symbolic mental content (i.e., conceptual or propositional thought or mind wandering), but also of perceptual, sensorimotor, or emotional content (as in full-absorption episodes).

• PC3: Self-luminosity: a phenomenal property of MPE typically described as “radiance”, “brilliance”, or the “clear light” of primordial awareness.

• PC4: Introspective availability: we can sometimes actively direct introspective attention to MPE and we can distinguish possible states by the degree of actually ongoing access.

• PC5: Epistemicity: as MPE is an epistemic relation (“awareness-of”), if MPE is successfully introspected, then we would predict a distinct phenomenal character of epistemicity or subjective confidence.

• PC6: Transparency/opacity: like all other phenomenal representations, MPE can vary along a spectrum of opacity and transparency.
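It may help to picture what “not a single state but a 6-dimensional space” amounts to. The following is only my own illustrative gloss, not anything in Metzinger’s paper: it treats each constraint as a degree rather than a yes/no property, so that a particular episode corresponds to a point in the space.

```python
from dataclasses import dataclass, astuple

@dataclass
class MPEState:
    """A point in the hypothetical six-dimensional MPE space.

    Purely for illustration, each constraint is given a degree between
    0.0 (absent) and 1.0 (fully realised); the field names are my own
    labels for Metzinger's PC1-PC6, not his terminology.
    """
    wakefulness: float           # PC1: tonic alertness
    low_complexity: float        # PC2: absence of reportable content
    self_luminosity: float       # PC3: "radiance" / "clear light"
    introspective_access: float  # PC4: degree of ongoing introspective access
    epistemicity: float          # PC5: phenomenal character of "awareness-of"
    transparency: float          # PC6: position on the opacity/transparency spectrum

# A hypothetical full-absorption episode: highly awake, empty of content,
# luminous, but with little introspective access going on at the time.
full_absorption = MPEState(0.9, 1.0, 0.8, 0.2, 0.7, 0.9)
print(astuple(full_absorption))  # -> (0.9, 1.0, 0.8, 0.2, 0.7, 0.9)
```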


At first I feared this was building too much on a foundation not yet well established, but against that Metzinger could fairly ask how he could consolidate without building; what we have is acknowledged to be a sketch for now; and in fact there’s nothing that looks obviously out of place to me.

For Metzinger this investigation of minimal experience follows on from earlier explorations of minimal self-awareness and minimal perspective; this might well be the most significant of the three, however. It opens the way to some testable hypotheses and, since it addresses “pure” consciousness, offers a head-on line of attack on the core problem of consciousness itself. Next year’s paper is surely going to be worth a look.

Consciousness – where are we?

Interesting to see the review of progress and prospects for the science of consciousness produced by Matthias Michel and others, and particularly the survey that was conducted in parallel. The paper discusses funding and other practical issues, but we’re also given a broad view of the state of play, with the survey recording broadly optimistic views and interestingly picking out Global Workspace proposals as the most favoured theoretical approach. However, consciousness science was rated less rigorous than other fields (which I suppose is probably attributable to the interdisciplinary character of the topic and in particular the impossibility of avoiding ‘messy’ philosophical issues).

Michel suggests that the scientific study of consciousness only really got established a few decades ago, after the grip of behaviourism slackened. In practical terms you can indeed start in the mid-twentieth century, but that overlooks the early structuralist psychologists of the late nineteenth century. Wundt is usually credited as the first truly scientific psychologist, though there were others who adopted the same project around the same time. The investigation of consciousness (in the sense of awareness) was central to their work, and some of their results were of real value. Unfortunately, their introspective methods suffered a fatal loss of credibility, and it was this that precipitated the extreme reaction against consciousness represented by behaviourism; behaviourism in turn eventually suffered an eclipse of its own, leaving the way clear for something like a fresh start, the point Michel takes as the real beginning. I think the longer history is worth remembering because it illustrates a pattern in which periods of energetic growth and optimism are followed by dreadful collapses, a pattern still recognisable in the field, perhaps most obviously in AI, but also in the outbreaks of enthusiasm followed by scepticism that have affected research based on fMRI scanning, for example.

In spite of the ‘winters’ affecting those areas, it is surely the advances in technology that have been responsible for the genuine progress recognised by respondents to the survey. Whatever our doubts about scanning, we undeniably know a lot more about neurology now than we did, even if that sometimes serves to reveal new mysteries, like the uncertain function of the newly-discovered ‘rosehip’ neurons. Similarly, though we don’t have conscious robots (and I think almost everyone now has a more mature sense of what a challenge that is), the project of Artificial General Intelligence has reshaped our understanding. I think, for example, that Daniel Dennett is right to argue that exploration of the wider Frame Problem in AI is not just a problem for computer scientists, but tells us about an important aspect of the human mind we had never really noticed before – its remarkable capacity for dealing with relevance and meaning, something that is to the fore in the fascinating recent development of the pragmatics of language, for example.

I was not really surprised to see the Global Workspace theory achieving top popularity in the survey (Bernard Baars perhaps missing out on a deserved hat-tip here); it’s a down-to-earth approach that makes a lot of sense and is relatively easily recruited as an ally of other theoretical insights. That said, it has been around for a while without much in the way of a breakthrough. It was not that much more surprising to see Integrated Information also doing well, though rated higher by non-professionals (Michel shrewdly suggests that they may be especially impressed by the relatively complex mathematics involved).

However, the survey only featured a very short list of contenders which respondents could vote for. The absence of illusionism and quantum theories is acknowledged; myself I would have included at least two schools of sceptical thought: computationalism/functionalism and other qualia sceptics – though it would be easy to lengthen the list. Most surprising, perhaps, is the absence of panpsychism. Whatever you think about it (and regulars will know I’m not a fan), it’s an idea whose popularity has notably grown in recent years and one whose further development is being actively pursued by capable adherents. I imagine the absence of these theories, and others such as mysterianism and the externalism doughtily championed by Riccardo Manzotti and others, is due to their being relatively hard to vindicate neurologically – though supporters might challenge that. Similarly, its robustly scientific neurological basis must account for the inclusion of ‘local recurrence’ – is that the same as recurrent processing?

It’s only fair to acknowledge the impossibility of coming up with a comprehensive taxonomy of views on consciousness which would satisfy everyone. It would be easy to give a list of twenty or more which merely generated a big argument. (Perhaps a good thing to do, then?)

Mapping the Connectome

Could Artificial Intelligence be the tool that finally allows us to understand the natural kind?

We’ve talked before about the possibility; this Nautilus piece explains that scientists at the Max Planck Institute of Neurobiology have come up with a way of using ‘neural’ networks to map, well, neural networks. There has been continued improvement in our ability to generate images of neurons and their connections, but using those abilities to put together a complete map is a formidable task; the brain has often been described as the most complex object in the universe and drawing up a full schematic of its connections is potentially enough work for a lifetime. Yet that map may well be crucial; recently the idea of a ‘connectome’, roughly equivalent to the ‘genome’, has become a popular concept, one that suggests such a map may be an essential part of understanding the brain and consciousness. The Max Planck scientists have developed an AI called ‘SyConn’, which turns images into maps automatically with very high accuracy. In principle I suppose this means we could have a complete human connectome in the not-too-distant future.
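For concreteness, here is a toy sketch of what a connectivity ‘map’ of this kind amounts to as a data structure. This is not SyConn’s actual output format, just an illustration in Python, with made-up neuron names and synapse counts, of a connectome represented as a directed graph.

```python
# Toy illustration (not SyConn's real data model): a connectome as a directed
# graph, with neurons as nodes and synapse counts as edge weights.
import networkx as nx

connectome = nx.DiGraph()

# Hypothetical neurons and synapse counts, purely for illustration.
synapses = [
    ("pyramidal_17", "interneuron_4", 12),
    ("pyramidal_17", "pyramidal_22", 3),
    ("interneuron_4", "pyramidal_22", 7),
]
for pre, post, count in synapses:
    connectome.add_edge(pre, post, synapses=count)

def strongest_targets(graph, neuron, n=3):
    """Return the n downstream partners that receive the most synapses."""
    edges = sorted(graph.out_edges(neuron, data=True),
                   key=lambda edge: edge[2]["synapses"], reverse=True)
    return [(post, data["synapses"]) for _, post, data in edges[:n]]

print(strongest_targets(connectome, "pyramidal_17"))
# -> [('interneuron_4', 12), ('pyramidal_22', 3)]
```

A structure like this supports purely anatomical queries (which cells contact which, and how heavily); whether it tells us anything about what the traffic means is the question taken up below.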

How much good would it do us, though? It can’t be bad to have a complete map, but there are three big problems. The first is that we can already be pretty sure that connections between neurons are not the whole story. Neurons come in many different varieties, ones that pretty clearly seem to have different functions – but it’s not really clear what those functions are. They operate with a vast repertoire of neurotransmitters, and are themselves pretty complex entities that may have genuine computational properties all on their own. They are supported by a population of other, non-networked cells that may have a crucial role in how the overall system works. They seem to influence each other in ways that do not require direct connection: through electromagnetic fields, or even through DNA messages. Some believe that consciousness is not entirely a matter of network computation anyway, but resides in quantum or electrical fields; certainly the artificial networks that were originally inspired by biological neurology seem to behave in ways that are useful but quite distinct from those of human cognition. Benjamin Libet thought that if only he could do the experiment, he might be able to demonstrate that a sliver of the brain cut out from its neighbouring tissue but left in situ would continue to do its job. That, surely, is going too far; the brain didn’t grow all those connections with such care for nothing. The connectome may not be the full story, but it has to be important.

The second problem, though, is that we might be going at the problem from the wrong end. A map of the national road network tells us some useful things about trade, but not what it is or, in the deeper sense, how it works. Without those roads, trade would not occur in anything like the same form; blockages and poor connections may hamper or distort it, and in regions isolated by catastrophic, um, road lesions, trade may cease altogether. Of course to understand things properly we should need to know that there are different kinds of vehicle doing different jobs on those roads, that places may be connected by canals and railways as well as roads, and so on. But more fundamentally, if we start with the map, we have no idea what trade really is. It is, in fact, primarily driven and determined by what people want, need, and believe, and if we fall into the trap of thinking that it is wholly determined by the availability of trucks, goods, and roads we shall make a serious mistake.

Third, and perhaps it’s the same problem in different clothes, we still don’t have any clear and accepted idea of how neurology gives rise to consciousness anyway. We’re not anywhere near being able to look at a network and say, yup, that is (or could be) conscious, if indeed it is ever possible to reach such a conclusion.

So do we really even want a map of the connectome? Oh yes…