Seating Consciousness

This short piece by Tam Hunt in Nautilus asks whether the brain’s electromagnetic fields could be the seat of consciousness.

What does that even mean? Let’s start with a sensible answer. It could just mean that electromagnetic effects are an essential part of the way the brain works. A few ideas along these lines are discussed in the piece, and it’s a perfectly respectable hypothesis. But it’s hard to see why that would mean the electromagnetic aspects of brain processes are the seat of consciousness any more than the chemical or physical aspects. In fact the whole idea of separating electromagnetic effects from the physical events they’re associated with seems slightly weird to me; you can’t really have one without the other, can you?

A much more problematic reading might be that the electromagnetic fields are where consciousness is actually located. I believe this would be a kind of category error. Consciousness in itself (as opposed to the processes that support and presumably generate it) does not have a location. It’s like a piece of arithmetic or a narrative; things that don’t have the property of a physical location.

It looks as if Hunt is really thinking in terms of the search, often pursued over the years, for the neural correlates of consciousness. The idea of electromagnetic fields being the seat of consciousness essentially says, stop looking at the neurons and try looking at the fields instead.

That’s fine, except that for me there’s a bit of a problem with the ‘correlates of consciousness’ strategy anyway; I doubt whether there is, in the final analysis, any systematic correlation (though things may not be quite as bad as that makes them sound).

By way of explanation I offer an analogy; the search for the textual correlates of story. We have reams of text available for research, and we know that some of this text has the property of telling one or another story. Lots of it, equally, does not – it’s non-fiction of various kinds. Now we know that for each story there are corresponding texts; the question is, which formal properties of those strings of text make them stories?

Now the project isn’t completely hopeless. We may be able to identify passages of dialogue, for example, just by examining formal textual properties (occurrence of quote marks and indentation, or of strings like ‘said’). If we can spot passages of dialogue, we’ll have a pretty good clue that we might be looking at a story.
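
Purely by way of illustration, here is roughly what such a formal detector might look like; the quote marks and ‘said’ come from the paragraph above, while the extra verbs and the thresholds are arbitrary choices of mine.

```python
import re

# Speech-attribution verbs: 'said' is from the text above; the others
# are guesses in the same spirit.
ATTRIBUTION = re.compile(r'\b(said|asked|replied|shouted)\b', re.IGNORECASE)
QUOTE_CHARS = '"\'\u2018\u2019\u201c\u201d'

def looks_like_dialogue(passage: str) -> bool:
    """Purely formal test: count quote marks and attribution verbs,
    with no access at all to what the passage means."""
    quotes = sum(passage.count(c) for c in QUOTE_CHARS)
    # Thresholds are arbitrary; a real study would tune them on a corpus.
    return quotes >= 4 and ATTRIBUTION.search(passage) is not None

def might_be_story(text: str) -> bool:
    """Flag a text as a possible story if any paragraph looks like
    dialogue. Stories without dialogue (false negatives) and quoted
    interviews (false positives) are exactly the failure modes
    discussed in the next paragraph."""
    return any(looks_like_dialogue(p) for p in text.split('\n\n'))
```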

But we can only go so far with that, and we will certainly be wrong if we claim that the textual properties that suggest dialogue can actually be identified with storyhood. It’s obvious that there could be passages of text with all those properties that are in fact mere gibberish, or a factual report. Moreover, there are many stories that have no dialogue and none of the other properties we might pick out. The fundamental problem is that storyhood is about what the text means, and that is not a formal property we can get to just by examination. In the same way, conscious states are conscious because they are about something, and aboutness is not a matter of patterns of neural or electromagnetic activity – though at a practical level we might actually be able to spot conscious activity with relatively good success rates, just as we could do a fair job of picking out stories from a mass of text even if we can’t, in fact, read.

Be that as it may, Hunt’s real point is to suggest that electromagnetic field correlates might be better than neural ones. Why (apart from research evidence) does he find that an attractive idea? If I’ve got this right, he is a panpsychist, someone who believes our consciousness is built out of the sparks of lower-grade awareness which are natural properties of matter. There is obviously a question there about how the sparks get put together into richer kinds of consciousness, and Hunt thinks resonance might play a key part. If it’s all about electromagnetic fields, it clearly becomes much easier to see how some sort of resonance might be in play.

I haven’t read enough about Hunt’s ideas to be anywhere near doing them justice; I have no doubt there is a lot of reasonable stuff to be said about and in favour of them. But as a first reaction, resonance looks to me like an effect that reduces complexity and richness rather than enhancing them. If the whole brain is pulsing along to the same rhythm, that suggests less content than a brain where every bit is doing its own thing. But perhaps that’s a subject I ought to address at greater length another time, if I’m going to.

Body Memory

Can you remember with your leg, and is that part of who you are? An interesting piece by Ben Platts-Mills in Psyche suggests it might be so. This is not Psyche the good old journal that went under in 2010, alas, but a promising new venture from those excellent folk at Aeon.

The piece naturally mentions Locke, who put memory at the centre of personal identity, and gives some fascinating snippets about the difficult lives of people with anterograde amnesia, the inability to form new memories. These people have obviously not lost their identities, however they may struggle. I have always been sceptical about the Lockean view more or less for this reason; if I lost my memories, would I stop being me? It doesn’t quite seem so.

In the cases mentioned we don’t quite have that total loss, though; the subjects mainly retain some or all of their existing memories, so a Lockean can simply say those retained memories are what preserve their identity. The inability to form new memories merely stops their identity going through the incremental further changes which are presumably normal (though some of these people do seem to change).

Platts-Mills wants to make the bolder claim that in fact important bits of memory and identity are stored, not in the brain, but in the context and bodies involved. Perhaps a subject does not remember anything about art, but fixed bodily habits draw them towards a studio where the tools and equipment (perhaps we could say the affordances) prompt them to create. This clearly chimes well with much that has been written about various versions of the extended or expanded mind.

I don’t think these examples make the case very strongly, though; Platts-Mills seems to underestimate the brain, in particular assuming that the absence of conscious recall means there’s no brain memory at all. In fact our brains clearly retain vast amounts of stuff we cannot get at consciously. Platts-Mills mentions a former paediatric nurse who cannot remember her career but, when handed a baby, holds it correctly and asks good professional questions about it. Fine, but it would take a lot to convince me that her brain is not playing a major role there.

One of the most intriguing and poignant things in the piece is the passing remark that some sufferers from anterograde amnesia actually prefer to forget their earlier lives. That seems difficult to understand, but at any rate those people evidently don’t feel the loss of old memories threatens their existence – or do they consider themselves new people, unrelated to the previous tenant of their body?

That last thought prompts another argument against the extended view. The notion that we could be transferred to someone else’s body (or to no body at all) is universally understood and accepted, at least for the purposes of fiction (and, without meaning to be rude, religion). The idea of moving into a different body is not one of those philosophical notions that you should never start trying to explain at the end of a dinner party; it’s a common feature of SF and magic stories that no-one jibs at. That doesn’t, of course, mean transfers are actually possible, even in theory (I feel fairly sure they aren’t), but it shows at least that people at large do not regard the body as an essential part of their identity.

A short solution

Tim Bollands recently tweeted his short solution to the Hard Problem (I mean, not literally in a tweet – it’s not that short). You might think that was enough to be going on with, but he also provides an argument for a pretty uncompromising kind of panpsychism. I have to applaud his boldness and ingenuity, but unfortunately I part ways with his argument pretty early on. The original tweet is here.

Bollands’ starting premise is that it’s ‘intuitively clear that combining any two non-conscious material objects results in another non-conscious object’. Not really. Combining a non-conscious Victorian lady and a non-conscious bottle of smelling salts might easily produce a conscious being. More seriously, I think most materialists would assume that conscious human beings can be put together by the gradual addition of neural tissue to a foetus that attains consciousness by a similarly gradual process, from dim sensations to complex self-aware thought. It’s not clear to me that that is intuitively untenable, though you could certainly say that the details are currently mysterious.

Bollands believes there are three conclusions we can draw: that humans are not conscious; that consciousness miraculously emerges; or that consciousness is already present in the matter brains are made from. The first, he says, is evidently false (remember that); the second is impossible, given that putting unconscious stuff together can’t produce consciousness; so the third must be true.

That points to some variety of panpsychism, and in fact Bollands goes boldly for the extreme version which attributes to individual particles the same kind of consciousness we have as human beings. In fact, your consciousness is really the consciousness of a single particle within you, which due to the complex processing of the body has come to think of itself as the consciousness of the whole.

I can’t recall any other panpsychist who has been willing to push fully-developed human consciousness right down to the level of elementary particles. I believe most either think consciousness starts somewhere above that level, or suppose that particles have only the dimmest imaginable spark of awareness. Taking this extreme position raises very difficult questions. Which particle is my conscious one? Or are they all conscious in parallel? Why doesn’t my consciousness feel like the consciousness of a particle? How could all the complex content of my current conscious state be held by a single invariant particle? And why do my particles cease to be conscious when my body is dead, or stunned? You may notice, incidentally, that Bollands’ conclusion seems to be that human beings as such are not, in fact, conscious, contradicting what he said earlier.

Brevity is perhaps the problem here; I don’t think Bollands has enough space to make his answers clear, let alone plausible. Nor is it really clear how all this solves the Hard Problem. Bollands reckons the Hard Problem is analogous to the Combination Problem for panpsychism, which he has solved by denying that any combination occurs (though his particles still somehow benefit from the senses and cognitive apparatus of the whole body). But the Hard Problem isn’t about how particles or nerves come together to create experience, it’s about how phenomenal experience can possibly arise from anything merely physical. That is, to put it no higher, at least as difficult to imagine for a single particle as for a large complex organism.

So I’m not convinced – but I’d welcome more contributions to the debate as bold as this one.

Dismissing materialism

Eric Holloway gives a brisk and entertaining dismissal of all materialist theories of consciousness here, boldly claiming that no materialist theory of consciousness is plausible. I’m not sure his coverage is altogether comprehensive, but let’s have a look at his arguments. He starts out by attacking panpsychism…

One proposed solution is that all particles are conscious. But, in that case, why am I a human instead of a particle? The vast majority of conscious beings in the universe would be particles, and so it is most likely I’d be a particle and not any sort of organic life form.

It’s really a bit of a straw man he’s demolishing here. I’m not sure panpsychists are necessarily committed to the view that particles are conscious (I’m not sure panpsychists are necessarily materialists, either), but I’ve certainly never run across anyone who thinks that the consciousness of a particle and the consciousness of a human being would be the same. It would be more typical to say that particles, or whatever the substrate is, have only a faint glow of awareness, or only a very simple, perhaps binary kind of consciousness. Clearly there’s then a need to explain how the simple kind of consciousness relates or builds up into our kind; not an easy task, but that’s the business panpsychists are in, and they can’t be dismissed without at least looking at their proposals.

Another solution is that certain structures become conscious. But a structure is an abstract entity and there is an untold infinite number of abstract entities.

This is perhaps Holloway’s riposte; he considers this another variety of panpsychism, though as stated it seems to me to encompass a lot of non-panpsychist theories, too. I wholeheartedly agree that conscious beings are not abstract entities, an error which is easy to fall into if you are keen on information or computation as the basis of your theory. But it seems to me hard to fight the idea that certain structural (or perhaps I mean functional) properties instantiated in particular physical beings are what amounts to consciousness. On the one hand there’s a vast wealth of evidence that structures in our brains have a very detailed influence on the content of our experiences. On the other, if there are no structural features, broadly described, that all physical instances of conscious entities have in common, it seems to me hard to avoid radical mysterianism. Even dualists don’t usually believe that consciousness can simply be injected randomly into any physical structure whatever (do they?). Of course we can’t yet say authoritatively what those structural features are.

Another option, says Holloway, is illusionism.

But, if we are allowed to “solve” the problem that way, all problems can be solved by denying them. Again, that is an unsatisfying approach that ‘explains’ by explaining away.

Empty dismissal of consciousness would indeed not amount to much, but again that isn’t what illusionists actually say; typically they offer detailed ideas about why consciousness must be an illusion and varied proposals about how the illusion arises. I think many would agree with David Chalmers that explaining why people do believe in consciousness is currently where some of the most interesting action is to be found.

Some say consciousness is an emergent property of a complex structure of matter… …At what point is a structure complex enough to become conscious?

I agree that complexity alone is not enough, though some people have been attracted to the idea, suggesting that the Internet, for example, might achieve consciousness. A vastly more sophisticated form of the same kind of thinking perhaps underlies the Integrated Information theory. But emergence can mean more than that; in particular it might say that when systems have enough structural complexity of the right kind (frantic hand-waving), they acquire interesting properties (meaningful, experiential ones) that can only be addressed on a higher level of interpretation. That, I think, is true; it just doesn’t help all that much.

Holloway wraps up with another pop at those fully-conscious particles that surely no-one believes in anyway. I don’t think he has shown that no materialist theory can be plausible – the great mainstream ideas of functionalism/computationalism are largely untouched – but I salute the chutzpah of anyone who thinks such an issue can be wrapped up in one side of A4 – and is willing to take it on!

Degrees of Consciousness

An interesting blog post by William Lycan gives a brisk treatment of the interesting question of whether consciousness comes in degrees, or is the kind of thing you either have or don’t. In essence, Lycan thinks the answer depends on what type of consciousness you’re thinking of. He distinguishes three: basic perceptual consciousness, ‘state consciousness’ where we are aware of our own mental state, and phenomenal consciousness. In passing, he raises interesting questions about perceptual consciousness. We can assume that animals, broadly speaking, probably have perceptual, but not state consciousness, which seems primarily if not exclusively a human matter. So what about pain? If an animal is in pain, but doesn’t know it is in pain, does that pain still matter?

Leaving that one aside as an exercise for the reader, Lycan’s answer on degrees is that the first two varieties of consciousness do indeed come in degrees, while the third, phenomenal consciousness, does not. Lycan gives a good ultra-brief summary of the state of play on phenomenal consciousness. Some just deny it (that represents a ‘desperate lunge’ in Lycan’s view); some, finding it undeniable, lunge the other way – or perhaps fall back? – by deciding that materialism is inadequate and that our metaphysics must accommodate irreducibly mental entities. In the middle are all the people who offer some partial or complete explanation of phenomenal consciousness. The leading view, according to Lycan, is something like his own interesting proposal that our introspective categorisation of experience cannot be translated into ordinary language; it’s the untranslatability that gives the appearance of ineffability. There is a fourth position out there beyond the reach of even the most reckless lunge, which is panpsychism; Lycan says he would need stronger arguments for that than he has yet seen.

Getting back to the original question, why does Lycan think the answer is, as it were, ‘yes, yes, no’? In the case of perceptual consciousness, he observes that different animals perceive different quantities of information and make greater or lesser numbers of distinctions. In that sense, at least, it seems hard to argue against consciousness occurring in degrees. He also thinks animals with more senses will have higher degrees of perceptual consciousness. He must, I suppose, be thinking here of the animal’s overall, global state of consciousness, though I took the question to be about, for example, perception of a single light, in which case the number of senses is irrelevant (though I think the basic answer remains correct).

On state consciousness, Lycan argues that our perception of our mental states can be dim, vivid, or otherwise varied in degree. There’s variation in actual intensity of the state, but what he’s mainly thinking of is the degree of attention we give it. That’s surely true, but it opens up a couple of cans of worms. For one thing, Lycan has already argued that perceptual states come in degrees by virtue of the amount of information they embody; now state consciousness which is consciousness of a perceptual state can also vary in degree because of the level of attention paid to the perceptual state. That in itself is not a problem, but to me it implies that the variability of state consciousness is really at least a two-dimensional matter. The second question is, if we can invoke attention when it comes to state consciousness, should we not also be invoking it in the case of perceptual consciousness? We can surely pay different degrees of attention to our perceptual inputs. More generally, aren’t there other ways in which consciousness can come in degrees? What about, for example, an epistemic criterion, ie how certain we feel about what we perceive? What about the complexity of the percept, or of our conscious response?

Coming to phenomenal consciousness, the brevity of the piece leaves me less clear about why Lycan thinks it alone fails to come in degrees. He asserts that wherever there is some degree of awareness of one’s own mental state, there is something it’s like for the subject to experience that state. But that’s not enough; it shows that you can have no phenomenal consciousness or some, but not that there’s no way the ‘some’ can vary in degree. Maybe sometimes there are two things it’s like? Lycan argued that perceptual consciousness comes in degrees according to the quantity of information; he didn’t argue that we can have some information or none, and that therefore perceptual consciousness is not a matter of degree. He didn’t simply say that wherever there is some quantity of perceptual information, there is perceptual consciousness.

It is unfortunately very difficult to talk about phenomenal experience. Typically, in fact, we address it through a sort of informal twinning. We speak of a red quale, though the red part is really the objective bit that can be explained by science. It seems to me a natural prima facie assumption that phenomenal experience must ‘inherit’ the variability of its objective counterparts. Lycan might say that, even if that were true, it isn’t what we’re really talking about. But I remain to be convinced that phenomenal experience cannot be categorised by degree according to some criteria.

Machines Like Me

Ian McEwan’s latest book Machines Like Me has a humanoid robot as a central character. Unfortunately I don’t think he’s a terrifically interesting robot; he’s not very different to a naïve human in most respects, except for certain unlikely gifts; an ability to discuss literature impressively and an ability to play the stock market with steady success. No real explanation for these superpowers is given; it’s kind of assumed that direct access to huge volumes of information together with a computational brain just naturally make you able to do these things. I don’t think it’s that easy, though in fairness these feats only resemble the common literary trick where our hero’s facility with languages or amazingly retentive memory somehow makes him able to perform brilliantly at tasks that actually require things like insight and originality.

The robot is called Adam; twenty-five of these robots have been created, twelve Adams and thirteen Eves, on the market for a mere £86,000 each. This doesn’t seem to make much commercial sense; if these are prototypes you wouldn’t sell them; if you’re ready to market them you’d be gearing up to make thousands of them, at least. Surely you’d charge more, too – you could easily spend £86k on a fancy new car. But perhaps prices are misleading, because we are in an alternate world.

This is perhaps the nub of it all. The prime difference here is that in the world of the novel, Alan Turing did not die, and was mainly responsible for a much faster development of computers and IT. Plausible humanoid robots have appeared by 1982. This seems to me an unhelpful contribution to the myth of Turing as ‘Mr Computer’. It’s sadly plausible that if he had lived longer he would have had more to contribute; but most likely in other mathematical fields, not in the practical development of the computer, where many others played key roles (as they did at Bletchley). If you ask me, John von Neumann was more than capable of inventing computers on his own, and in fact in the real postwar world they developed about as fast as they could have done whether Turing was alive or not. McEwan nudges things along a bit more by having Tesla around to work on silicon chips (!) and he brings Demis Hassabis back a bit so he can be Turing’s collaborator (Hassabis evidently doomed to work on machine learning whenever he’s born). This is all a bit silly, but McEwan enjoys it enough to have advanced IT in Exocet missiles give victory to Argentina in the Falklands war, with consequences for British politics which he elaborates in the background of the story. It’s a bit odd that Argentina should get an edge from French IT when we’re being asked to accept that the impeccably British ‘Sir’ Alan Turing was personally responsible for the great technical leap forward which has been made, but it’s pointless to argue over what is ultimately not much more than fantasy.

Turing appears in the novel, and I hate the way he’s portrayed. One of McEwan’s weaknesses, IMO, is his reverence for the British upper class, and here he makes Sir Alan into the sort of grandee he admires; a lordly fellow with a large house in North London who summons people when he wants information, dismisses them when he’s finished, and hands out moral lectures. Obviously I don’t know what Turing was really like, but to me his papers give the strong impression of an unassuming man of distinctly lower middle class origins; a far more pleasant person than the arrogant one we get in the book.

McEwan doesn’t give us any great insight into how Adam comes to have human-like behaviour (and surely human-like consciousness). His fellow robots are prone to a sort of depression which leads them to a form of suicide; we’re given the suggestion that they all find it hard to deal with human moral ambiguity, though it seems to me that humans in their position (enslaved to morally dubious idiots) might get a bit depressed too. As the novel progresses, Adam’s robotic nature seems to lose McEwan’s interest anyway, as a couple of very human plots increasingly take over the story.

McEwan got into trouble for speaking dismissively of science fiction; is Machines Like Me SF? On a broad reading I’d say why not? – but there is a respectable argument to be made for the narrower view. In my youth the genre was pretty well-defined. There were the great precursors; Jules Verne, H.G. Wells, and perhaps Mary Shelley, but SF was mainly the product of the American pulp magazines of the fifties and sixties, a vigorous tradition that gave rise to Asimov, Clarke, and Heinlein at the head of a host of others. That genre tradition is not extinct, upheld today by, for example, the beautiful stories of Ted Chiang.

At the same time, though, SF concepts have entered mainstream literature in a new way. The Time Traveller’s Wife, for example, obviously makes brilliant use of an SF concept, but does so in the service of a novel which is essentially a love story in the literary mainstream of books about people getting married which goes all the way back to Pamela. There’s a lot to discuss here, but keeping it brief I think the new currency of SF ideas comes from the impact of computer games. The nerdy people who create computer games read SF and use SF concepts; but even non-nerdy people play the games, and in that way they pick up the ideas, so that novelists can now write about, say, a ‘portal’ and feel confident that people will get the idea pretty readily; a novel that has people reliving bits of their lives in an attempt to get them right (like The Seven Deaths Of Evelyn Hardcastle) will not get readers confused the way it once would have done. But that doesn’t really make Evelyn Hardcastle SF.

I think that among other things this wider dispersal of a sort of SF-aware mentality has led to a vast improvement in the robots we see in films and the like. It used to be the case that only one story was allowed: robots take over. Latterly films like Ex Machina or Her have taken a more sophisticated line; the TV series Westworld, though back with the take-over story, explicitly used ideas from Julian Jaynes.

So, I think we can accept that Machines Like Me stands outside the pure genre tradition but benefits from this wider currency of SF ideas. Alas, in spite of that we don’t really get the focus on Adam’s psychology that I should have preferred.

Minds Within Minds

Can there be minds within minds? I think not.

The train of thought I’m pursuing here started in a conversation with a friend (let’s call him Fidel) who somehow manages to remain not only an orthodox member of the Church of England, but one who is apparently quite untroubled by any reservations, doubts, or issues about the theology involved. Now of course we don’t all see Christianity the same way. Maybe Fidel sees it differently from me. For many people (I think) religion seems to be primarily a social organisation of people with a broadly similar vision of what is good, derived mainly from the teachings of Jesus. To me, and I suspect to most people who are likely to read this, it’s primarily a set of propositions, whose truth, falsity, and consistency are the really important matter. To them it’s a club, to us it’s a theory. I reckon the martyrs and inquisitors who formed the religion, who were prepared to die or kill over formal assent to a point of doctrine, were actually closer to my way of thinking on this, but there we are.

Be that as it may, my friend cunningly used the problems (or mysteries) of his religion as a weapon against me. You atheists are so complacent, he said, you think you’ve got it all sorted out with your little clockwork science universe, but you don’t appreciate the deep mysteries, beyond human understanding. There are more things in heaven and earth…
But that isn’t true at all, I said. If you think current physics works like clockwork, you haven’t been paying attention. And there are lots of philosophical problems where I have only reasonable guesses at the answer, or sometimes, even on some fundamental points, little idea at all. Why, I said injudiciously, I don’t understand at all what reality itself even is. I can sort of see that to be real is to be part of a historical process characterised by causality, but what that really means and why there is anything like that, what the hell is really going on with it…? Ah, said Fidel, what a confession! Well, when you’re ready to learn about reality, you know where to come…

I don’t, though. The trouble is, I don’t think Christianity really has any answers for me on this or many other metaphysical points. Maybe it’s just my ignorance of theology talking here, but it seems to me that, just as Christianity tells us that people are souls and then falls largely silent on how souls and spirits work and what they are, it tells us that God made the world and withholds any useful details of how and what. I know that Buddhism and Taoism tell us pretty clearly that reality is an illusion; that seems to raise other issues but it’s a respectable start. The clearest Christian answer I can come up with is Berkeley’s idealism; that is, that to be real is to be within the mind of God; the world is whatever God imagines or believes it to be.

That means that we ourselves exist only because we are among the contents of God’s mind. Yet we ourselves are minds, so that requires it to be true that minds can exist within minds (yes, at last I am getting to the point). I don’t think a mind can exist within another mind. The simplest way to explain it is perhaps as follows; a thought that exists within a mind, that was generated by that mind, belongs to that mind. So if I am sustaining another mind by my thoughts, all of its thoughts are really generated by me, and of course they are within my mind. So they remain my thoughts, the secondary mind has none that are truly its own – and it doesn’t really exist. In the same way, either God is thinking my thoughts for me – in which case I’m just a puppet – or my thoughts are outside his mind, in which case my reality is grounded in something other than the Divine mind.

That might help explain why God would give us free will, and so on; it looks as if Berkeley must have been perfectly wrong: in fact reality is exactly the quality possessed by those things that are outside God’s mind. Anyway, my grasp of theology is too weak for my thoughts on the matter to be really worth reading (so I owe you an apology); but the idea of minds within minds arises in AI-related philosophy, too; perhaps in relation to Nick Bostrom’s argument that we are almost certainly part of a computer simulation. That argument rests on the idea that future folk with advanced computing tech will produce perfect simulations of societies like their own, which will themselves go on to generate similar simulations, so that most minds, statistically, are likely to be simulated ones. If minds can’t exist within other minds, might we be inclined to doubt that they could arise in mind-like simulations?
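
To make Bostrom’s statistical step concrete, here is a toy calculation with made-up numbers (ten simulations per world, three levels of nesting); nothing hangs on the particular figures.

```python
def population(depth: int, sims_per_world: int = 10) -> tuple[int, int]:
    """Count real vs simulated worlds when every world, real or
    simulated, runs sims_per_world simulations, nested to 'depth'."""
    real = 1
    simulated = sum(sims_per_world ** d for d in range(1, depth + 1))
    return real, simulated

real, simulated = population(depth=3)
print(f"real: {real}, simulated: {simulated}")  # real: 1, simulated: 1110
# A mind picked at random from this population is simulated with
# probability 1110/1111, i.e. about 99.9%.
```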

Suppose for the sake of argument that we have a conscious mind that is purely computational; its mind arises from the computations it performs. Why should such a program not contain, as some kind of subroutine or something, a distinct process that has the same mind-generating properties? I don’t think the answer is obvious, and it will depend on your view of consciousness. For me it’s all about recognition; a conscious mind is a process whose outputs are conditioned by the recognition of future and imagined entities. So I would see two alternatives; either the computational mind we supposed to exist has one locus of recognition, or two. If it has one, the secondary mind can only be a puppet; if there are two, then whatever the computational relationship, the secondary process is independent in a way that means it isn’t truly within the primary mind.

That doesn’t seem to give me the anti-Bostrom argument I thought might be there, and let’s be honest, the notion of a ‘locus of recognition’ could possibly be attacked. If God were doing my thinking, I feel it would be a bit sharper than this…

Consciousness without Content

Is there such a thing as consciousness without content? If so, is that minimal, empty consciousness, in fact, the constant ground underlying all conscious states? Thomas Metzinger launched an investigation of this question in the third of his Carnap lectures a year ago; there’s a summary here (link updated to correct version) in a discussion paper, and a fully worked-up paper will appear next year (hat-tip to Tom Clark for drawing this to my attention). The current paper is exploratory in several respects. One possible result of identifying the hypothetical state of Minimal Phenomenal Experience (MPE) would be to facilitate the identification of neural correlates; Metzinger suggests we might look to the Ascending Reticular Arousal System (ARAS), but offers it only as a plausible place-holder which future research might set aside.

More philosophically, the existence of an underlying conscious state which doesn’t represent anything would be a fatal blow to the view that consciousness is essentially representational in character. On that widely-held view, a mental state that doesn’t feature representation cannot in fact be conscious at all, any more than text that contains no characters is really text. The alternative is to think that consciousness is more like a screen being turned on; we see only (let’s say) a blank white expanse, but the basic state, precondition to the appearance of images, is in place, and similarly MPE can be present without ‘showing us’ anything.

There’s a danger here of getting trapped in an essentially irrelevant argument about the difference between representing nothing and not representing anything, but I think it’s legitimate to preserve representationalism (as an option at least) merely by claiming that even a blank screen necessarily represents something, namely a white void. Metzinger prefers to suggest that the MPE represents “the hidden cause of the ARAS-signal”. That seems implausible to me, as it seems to involve the unlikely idea that we all have constantly in mind a hidden thing most of us have never heard of.

Metzinger does a creditable job of considering evidence from mystic experience as well as dreamless sleep. There is considerable support here for the view that when the mind is cleared, consciousness is not lost but purified. Metzinger rightly points to some difficulties with taking this testimony on board. One is the likelihood of what he calls “theory contamination”. Most contemplatives are deeply involved with mystic or scriptural traditions that already tell them what is to be expected. Second is the problem of pinning down a phenomenal experience with no content, which automatically renders it inexpressible or ineffable. Metzinger makes it clear that this is not any kind of magic or supra-scientific ineffability, just the practical methodological issue that there isn’t, as it were, anything to be said about non-existent content. Third we have an issue Metzinger calls “performative self-contradiction”. Reports of what you get when your mind is emptied make clear that the MPE is timeless, placeless, lacking sensory character, and so on. Metzinger is a little disgusted with this; if the experience was timeless, how do you know it happened last Tuesday between noon and ten past? People keep talking about brilliance and white light, which should not be present in a featureless experience!

Here I think he under-rates the power and indeed the necessity of metaphors. To describe lack of content we fall back on a metaphor of blindness, but to be in darkness might imply simple failure of the eyes, so we tend to go for our being blinded by powerful light and the vastness of space. It’s also possible that white is a default product of our neural systems, which when deprived of input are known to produce moire patterns and blobs of light from nowhere. Here we are undoubtedly getting into the classic problems that affect introspection; you cannot have a cleared mind and at the same time be mentally examining your own phenomenal experience. Metzinger aptly likens these problems to trying to check whether the light in the fridge goes off when the door is closed (I once had one that didn’t, incidentally; it gave itself away by getting warm and unhelpfully heating food placed near it). Those are real problems that have been discussed extensively, but I don’t think they need stop the investigation. In a nutshell, William James was right to say that introspection must be retrospection; we examine our experiences afterwards. This perhaps implies that memory must persist alongside MPE, but that seems OK to me. Without expressing it in quite these terms, Metzinger reaches broadly similar conclusions.

Metzinger is mainly concerned to build a minimal model of the basic MPE, and he comes up with six proposed constraints, giving him in effect not a single MPE state but a 6-dimensional space. The constraints are as follows.

• PC1: Wakefulness: the phenomenal quality of tonic alertness.

• PC2: Low complexity of reportable content: an absence of high-level symbolic mental content (i.e., conceptual or propositional thought or mind wandering), but also of perceptual, sensorimotor, or emotional content (as in full-absorption episodes).

• PC3: Self-luminosity: a phenomenal property of MPE typically described as “radiance”, “brilliance”, or the “clear light” of primordial awareness.

• PC4: Introspective availability: we can sometimes actively direct introspective attention to MPE and we can distinguish possible states by the degree of actually ongoing access.

• PC5: Epistemicity: as MPE is an epistemic relation (“awareness-of”), if MPE is successfully introspected, then we would predict a distinct phenomenal character of epistemicity or subjective confidence.

• PC6: Transparency/opacity: like all other phenomenal representations, MPE can vary along a spectrum of opacity and transparency.


At first I feared this was building too much on a foundation not yet well established, but against that Metzinger could fairly ask how he could consolidate without building; what we have is acknowledged to be a sketch for now; and in fact there’s nothing that looks obviously out of place to me.

For Metzinger this investigation of minimal experience follows on from earlier explorations of minimal self-awareness and minimal perspective; this might well be the most significant of the three, however. It opens the way to some testable hypotheses and, since it addresses “pure” consciousness, offers a head-on attack route on the core problem of consciousness itself. Next year’s paper is surely going to be worth a look.

Consciousness – where are we?

Interesting to see the review of progress and prospects for the science of consciousness produced by Matthias Michel and others, and particularly the survey that was conducted in parallel. The paper discusses funding and other practical issues, but we’re also given a broad view of the state of play, with the survey recording broadly optimistic views and interestingly picking out Global Workspace proposals as the most favoured theoretical approach. However, consciousness science was rated less rigorous than other fields (which I suppose is probably attributable to the interdisciplinary character of the topic and in particular the impossibility of avoiding ‘messy’ philosophical issues).

Michel suggests that the scientific study of consciousness only really got established a few decades ago, after the grip of behaviourism slackened. In practical terms you can indeed start in the mid twentieth century, but that actually overlooks the early structuralist psychologists a hundred years earlier. Wundt is usually credited as the first truly scientific psychologist, though there were others who adopted the same project around the same time. The investigation of consciousness (in the sense of awareness) was central to their work, and some of their results were of real value. Unfortunately, their introspective methods suffered a fatal loss of credibility, which is what precipitated the extreme reaction against consciousness represented by behaviourism, which eventually suffered an eclipse of its own, leaving the way clear for something like a fresh start, the point Michel takes as the real beginning. I think the longer history is worth remembering because it illustrates a pattern in which periods of energetic growth and optimism are followed by dreadful collapses, a pattern still recognisable in the field, perhaps most obviously in AI, but also in the outbreaks of enthusiasm followed by scepticism that have affected research based on fMRI scanning, for example.

In spite of the ‘winters’ affecting those areas, it is surely the advances in technology that have been responsible for the genuine progress recognised by respondents to the survey. Whatever our doubts about scanning, we undeniably know a lot more about neurology now than we did, even if that sometimes serves to reveal new mysteries, like the uncertain function of the newly-discovered ‘rosehip’ neurons. Similarly, though we don’t have conscious robots (and I think almost everyone now has a more mature sense of what a challenge that is), the project of Artificial General Intelligence has reshaped our understanding. I think, for example, that Daniel Dennett is right to argue that exploration of the wider Frame Problem in AI is not just a problem for computer scientists, but tells us about an important aspect of the human mind we had never really noticed before – its remarkable capacity for dealing with relevance and meaning, something that is to the fore in the fascinating recent development of the pragmatics of language, for example.

I was not really surprised to see the Global Workspace theory achieving top popularity in the survey (Bernard Baars perhaps missing out on a deserved hat-tip here); it’s a down-to-earth approach that makes a lot of sense and is relatively easily recruited as an ally of other theoretical insights. That said, it has been around for a while without much in the way of a breakthrough. It was not that much more surprising to see Integrated Information also doing well, though rated higher by non-professionals (Michel shrewdly suggests that they may be especially impressed by the relatively complex mathematics involved).

However, the survey only featured a very short list of contenders which respondents could vote for. The absence of illusionism and quantum theories is acknowledged; myself I would have included at least two schools of sceptical thought; computationalism/functionalism and other qualia sceptics – though it would be easy to lengthen the list. Most surprising, perhaps, is the absence of panpsychism. Whatever you think about it (and regulars will know I’m not a fan), it’s an idea whose popularity has notably grown in recent years and one whose further development is being actively pursued by capable adherents. I imagine the absence of these theories, and others such as mysterianism and the externalism doughtily championed by Riccardo Manzotti and others, is due to their being relatively hard to vindicate neurologically – though supporters might challenge that. Similarly, its robustly scientific neurological basis must account for the inclusion of ‘local recurrence’ – is that the same as recurrent processing?

It’s only fair to acknowledge the impossibility of coming up with a comprehensive taxonomy of views on consciousness which would satisfy everyone. It would be easy to give a list of twenty or more which merely generated a big argument. (Perhaps a good thing to do, then?)

Mapping the Connectome

Could Artificial Intelligence be the tool that finally allows us to understand the natural kind?

We’ve talked before about the possibility; this Nautilus piece explains that scientists at the Max Planck Institute of Neurobiology have come up with a way of using ‘neural’ networks to map, well, neural networks. There has been a continued improvement in our ability to generate images of neurons and their connections, but using those abilities to put together a complete map is a formidable task; the brain has often been described as the most complex object in the universe and drawing up a full schematic of its connections is potentially enough work for a lifetime. Yet that map may well be crucial; recently the idea of a ‘connectome’ roughly equivalent to the ‘genome’ has become a popular concept, one that suggests such a map may be an essential part of understanding the brain and consciousness. The Max Planck scientists have developed an AI, called ‘SyConn’, which turns images into maps automatically with very high accuracy. In principle I suppose this means we could have a complete human connectome in the not-too-distant future.
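
For the flavour of the thing only: this is emphatically not SyConn’s actual architecture, just a minimal stand-in (in PyTorch, my choice) showing the shape of the task, an image volume in and a per-voxel classification map out.

```python
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Toy image-to-map network: takes a 3D image volume and predicts,
    for every voxel, a class (background, cell body, synapse, ...).
    The real SyConn pipeline is far more elaborate; this only
    illustrates the input/output structure of the problem."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, n_classes, kernel_size=3, padding=1),
        )

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # Returns (batch, n_classes, depth, height, width) logits.
        return self.net(volume)

# One imaginary 64-cubed block of microscope data:
logits = TinySegmenter()(torch.randn(1, 1, 64, 64, 64))
```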

How much good would it do us, though? It can’t be bad to have a complete map, but there are three big problems. The first is that we can already be pretty sure that connections between neurons are not the whole story. Neurons come in many different varieties, ones that pretty clearly seem to have different functions – but it’s not really clear what they are. They operate with a vast repertoire of neurotransmitters, and are themselves pretty complex entities that may have genuine computational properties all on their own. They are supported by a population of other, non-networked cells that may have a crucial role in how the overall system works. They seem to influence each other in ways that do not require direct connection; through electromagnetic fields, or even through DNA messages. Some believe that consciousness is not entirely a matter of network computation anyway, but resides in quantum or electrical fields; certainly the artificial networks that were originally inspired by biological neurology seem to behave in ways that are useful but quite distinct from those of human cognition. Benjamin Libet thought that if only he could do the experiment, he might be able to demonstrate that a sliver of the brain cut out from its neighbouring tissue but left in situ would continue to do its job. That, surely, is going too far; the brain didn’t grow all those connections with such care for nothing. The connectome may not be the full story, but it has to be important.

The second problem, though, is that we might be going at the problem from the wrong end. A map of the national road network tells us some useful things about trade, but not what it is or, in the deeper sense, how it works. Without those roads, trade would not occur in anything like the same form; blockages and poor connections may hamper or distort it, and in regions isolated by catastrophic, um, road lesions, trade may cease altogether. Of course to understand things properly we should need to know that there are different kinds of vehicle doing different jobs on those roads, that places may be connected by canals and railways as well as roads, and so on. But more fundamentally, if we start with the map, we have no idea what trade really is. It is, in fact, primarily driven and determined by what people want, need, and believe, and if we fall into the trap of thinking that it is wholly determined by the availability of trucks, goods, and roads we shall make a serious mistake.

Third, and perhaps it’s the same problem in different clothes, we still don’t have any clear and accepted idea of how neurology gives rise to consciousness anyway. We’re not anywhere near being able to look at a network and say, yup, that is (or could be) conscious, if indeed it is ever possible to reach such a conclusion.

So do we really even want a map of the connectome? Oh yes…