Entangled Consciousness


Could the answer be quantum physics after all? Various people have suggested that the mystery of consciousness might turn out to be explicable in terms of quantum physics; most notably we have the theory championed by Roger Penrose and Stuart Hameroff, which suggests that as-yet unknown quantum mechanics might be going on in the microtubules of neural cells. Atai Barkai has kindly shared with me his draft paper which sets out a different take on the matter.

Barkai’s target is subjective experience, or what he defines as the consciousness instance. We may be conscious of things in the world out there, but that kind of consciousness always involves an element of inference, explicit or not; the consciousness instance is free of those uncertainties, consisting only of the direct experience of which we can be certain. Some hardy sceptics would deny that there is anything in consciousness that we can be that sure of, but I think it is clear enough what distinction Barkai is making and that it is, as it were, first order experience he is aiming at.

The paper puts a complex case very carefully, but flinging caution to the winds I believe the gist goes like this. Typically, those who see a problem with qualia and subjective experience think it lies outside the account given by physics, which arguably therefore needs some extension. Barkai agrees that subjectivity is outside the classical account and that a more capacious concept of the universe is needed to accommodate it; but he thinks that quantum entanglement can already, properly read, provide what is needed.

It’s true that philosophical discussions generally either treat classical physics as being the whole subject or ignore any implications that might arise from the less intuitive aspects of quantum physics. Besides entanglement, Barkai discusses free will and its possible connection with the indeterminism of quantum physics (if there really is indeterminism in quantum physics). On this and other points I simply don’t have a clear enough grasp of the science to tackle the questions effectively; better informed comments would be welcome.

Entanglement, as I understand it, means that the states of two particles (or in principle, larger bodies) may be correlated in ways that cannot be explained by normal classical interaction and do not weaken with distance. Contrary to a popular misreading, entanglement on its own cannot be used to transmit information faster than light; what it does help open up is the theoretical possibility of quantum computing, which promises dramatic speed-ups for certain classes of problem, though not instant solutions to every computable one. Whether the brain might be doing this kind of thing is a question which Barkai leaves as an interesting speculation.

The main problem for me in the idea that entanglement explains subjectivity is understanding intuitively how that could be so. Entanglement seems to offer the possibility that there might be more information in our mental operations than classical physics could account for; that feels helpful, but does it explain the qualitative difference between mere dead data and the ‘something it is like’ of subjectivity? I don’t reject the idea, but I don’t think I fully grasp it either.

There is also a more practical objection. Quantum interactions are relevant at a microscopic level, but as we ascend to normal scales, it is generally assumed that wave functions collapse and we slip quickly back into a world where classical physics reigns. It is generally thought that neurons, let alone the whole brain, are just too big and hot for quantum interactions to have any relevance. This is why Penrose looks for new quantum mechanics in tiny microtubules rather than resting on what known quantum mechanics can provide.

As I say, I’m not really competent to assess these issues, but there is a neatness and a surprising novelty about Barkai’s take that suggests it merits further discussion.

Unconscious and Conscious

What if consciousness is just a product of our non-conscious brain, ask Peter Halligan and David A Oakley? But how could it be anything else? If we take consciousness to be a product of brain processes at all, it can only come from non-conscious ones. If consciousness had to come from processes that were themselves already conscious, we should have a bit of a problem on our hands. Granted, it is in one sense remarkable that the vivid light of conscious experience comes from stuff that is at some level essentially just simple physical matter – it might even be that incredulity over that is an important motivator for panpsychists, who find it easier to believe that everything is at least a bit conscious. But most of us take that gap to be the evident core fact that we’re mainly trying to explain when we debate consciousness.

Of course, Halligan and Oakley mean something more than that. Their real point is a sceptical, epiphenomenalist one; that is, they believe consciousness is ineffective. All the decisions are really made by unconscious processes; consciousness notices what is happening and, like an existentialist, claims the act as its own after the fact (though of course the existentialist does not do it automatically). There is of course a lot of evidence that our behaviour is frequently influenced by factors we are not consciously aware of, and that we happily make up reasons for what we have done which are not the true ones. But ‘frequently’ is not ‘invariably’ and in fact there seem to be plenty of cases where, for example, our conscious understanding of a written text really does change our behaviour and our emotional states. I would find it pretty hard to think that my understanding of an email with important news from a friend was somehow not conscious, or that my conscious comprehension was irrelevant to my subsequent behaviour. That is the kind of unlikely stuff that the behaviourists ultimately failed to sell us. Halligan and Oakley want to go quite a way down that same unpromising path, suggesting that it is actually unhelpful to draw a sharp distinction between conscious and unconscious. They will allow a kind of continuum, but to me it seems clear that there is a pretty sharp distinction between the conscious, detached plans of human beings and the instinct-driven, environment-controlled behaviour of animals, one it is unhelpful to blur or ignore.

One distinction that I think would be helpful here is between conscious and what I’ll call self-conscious states. If I make a self-conscious decision I’m aware of making it; but I can also just make the decision; in fact, I can just act. In my view, cases where I just make the decision in that unreflecting way may still be conscious; but I suspect that Halligan and Oakley (like Higher Order theorists) accept only self-conscious decisions as properly conscious ones.

Interesting to compare the case put by Peter Carruthers in Scientific American recently; he argues that the whole idea of conscious thought is an error. He introduces the useful idea of the Global Workspace proposed by Bernard Baars and others; a place where data from different senses can be juggled and combined. To be in the workspace is to be among the contents of consciousness, but Carruthers believes only sensory items get in there. He’s happy to allow bits of ‘inner speech’ or mental visualisation, deceptive items that mislead us about our own thoughts; but he won’t allow completely abstract stuff (again, you may see some resemblance to the ‘mentalistic’ entities disallowed by the behaviourists). I don’t really know why not; if abstractions never enter consciousness or the workspace, how is it that we’re even talking about them?
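Purely for illustration, the workspace idea can be caricatured in a few lines of code. Everything here (the salience threshold, the `abstract` flag, the names) is my own invented stand-in, not Baars’ actual model or Carruthers’ position, but it makes vivid exactly where Carruthers’ restriction would bite:

```python
# A deliberately crude toy of the Global Workspace picture: processes
# compete for admission, and only admitted items count as contents of
# consciousness. The threshold and flag are illustrative inventions.

class GlobalWorkspace:
    def __init__(self, admit_abstract: bool = False):
        self.admit_abstract = admit_abstract
        self.contents = []

    def compete(self, item: str, salience: float, abstract: bool = False) -> bool:
        # On the toy reading of Carruthers: abstract items simply never
        # win the competition for broadcast.
        if abstract and not self.admit_abstract:
            return False
        if salience > 0.5:  # arbitrary admission threshold
            self.contents.append(item)
            return True
        return False

ws = GlobalWorkspace()
ws.compete("red patch in left visual field", salience=0.9)
ws.compete("inner speech: 'where are my keys?'", salience=0.8)
ws.compete("the concept of justice", salience=0.9, abstract=True)
print(ws.contents)  # only the sensory-style items were admitted
```

On this caricature my question above becomes concrete: if abstract items are barred from the workspace by fiat, it is hard to see how they ever get talked about at all.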

Carruthers thinks our ‘Theory Of Mind’ faculty misleads us; as a quick heuristic it’s best to assume that others know their own mental states accurately, and so, it’s natural for us to think that we do too: that we have direct access and cannot be wrong about whether we, for example, feel hungry. But he thinks we know much less than we suppose about our own motives and mental states. On this he seems a little more moderate than Halligan and Oakley, allowing that conscious reflection can sometimes affect what we do.

I think the truth is that our mental, and even our conscious processes, are in fact much more complex and multi-layered than these discussions suggest. Let’s consider the causal efficacy of conscious thought in simple arithmetic. When I add two to two in my head and get four, have the conscious thoughts about the sum caused the conscious thought of the answer, or was there an underlying non-conscious process which simply waved flags at a couple of points?

Well, I certainly can do the thing epiphenomenally. I can call up a mental picture of the written characters, for example, and step through them one by one. In that case the images do not directly cause each other. If I mentally visualise two balls and then another two balls and then mentally count them, perhaps that is somewhat different? Can I think of two abstractly and then notice conceptually that its reduplication is identical with the entity ‘four’? Carruthers would deny it, I think, but I’m not quite sure. If I can, what causal chain is operating? At this point it becomes clear that I really have little idea of how I normally do arithmetic, which I suppose scores a point for the sceptics. The case of two plus two being four is perhaps a bad example, because it is so thoroughly remembered I simply replay it, verbally, visually, implicitly, abstractly or however I do it. What if I were multiplying 364 by 5? The introspective truth seems to be that I do something akin to running an algorithm by hand. I split the calculation into separate multiplications, whose answer I mainly draw direct from memory, and then I try to remember results and add them, again usually relying on remembered results. Does my thinking of four times five and recalling that the answer is twenty mean there was a causal link between the conscious thought and the conscious result? I think there may be such a link, but frustratingly if I use my brain in a slightly different way there may not be a direct one, or there may be a direct one which is not of the appropriate kind (because, say, the causal link is direct but irrelevant to the mathematical truth of the conclusion).

Having done all that, I realise that since I’m multiplying by five, I could have simply multiplied by ten, which can be done by simply adding a zero (Is that done visually – is it necessarily done visually? Some people cannot conjure up mental images at all.) and halving the result. Where did that little tactic come from? Did I think of it consciously, and was its arrival in reportable condition causally derived from my wondering about how best to do the sum (in words, or not in words?) or was it thrust into a kind of mental in-tray (rather spookily) by an unconscious part of my brain which has been vaguely searching around for any good tricks for the last couple of minutes? Unfortunately I think it could have happened in any one of a dozen different ways, some of which probably involve causally effective conscious states while others may not.
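For what it’s worth, the two strategies I have just described by introspection can be written out explicitly. This is only a sketch of the arithmetic itself, of course, not a model of whatever my brain actually does; the function names are mine:

```python
# Two ways of doing 364 * 5 "in the head", made explicit.

def multiply_by_partial_products(n: int, m: int) -> int:
    """Split n into place-value parts, multiply each by m, then add,
    as in: 300*5 + 60*5 + 4*5."""
    total = 0
    place = 1
    while n > 0:
        digit = n % 10
        total += digit * place * m
        n //= 10
        place *= 10
    return total

def multiply_by_five_via_ten(n: int) -> int:
    """The shortcut: append a zero (multiply by ten), then halve."""
    return (n * 10) // 2

print(multiply_by_partial_products(364, 5))  # 1820
print(multiply_by_five_via_ten(364))         # 1820
```

The two routes agree on the answer while sharing almost no intermediate steps, which is roughly the point: the same conscious result can sit atop quite different causal chains.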

In the end the biggest difference between me and the sceptics may come down to what we are prepared to call conscious; they only want the states I’m calling self-conscious. Suppose we take it that there is indeed a metaphorical or functional space where mental items become ‘available’ (in some sense I leave unclarified) to influence our behaviour. It could indeed be a Global Workspace, but need not commit us to all the details of that theory. Then they allow at most those items actually within the space to be conscious, while I would allow anything capable of entering it. My intuition here is that the true borderline falls, not between those mental items I’m aware of and those I merely have, but between those I could become aware of if my attention were suitably directed, and those I could never extract from the regions where they are processed, however I thought about it. When I play tennis, I may consciously plan a strategy, but I also consciously choose where to send the ball on the spur of the moment; that is not normally a self-conscious decision, but if I stopped to review it it easily could become one – whereas the murky Freudian reasons why I particularly want to win the game cannot be easily accessed (without lengthy therapy at any rate) and the processes my visual cortex used to work out the ball’s position are forever denied me.

My New Year resolution is to give up introspection before I plunge into some neo-Humean abyss of self-doubt.

Success with Consciousness

What would success look like, when it comes to the question of consciousness?

Of course it depends which of the many intersecting tribes who dispute or share the territory you belong to. Robot builders and AI designers have known since Turing that their goal is a machine whose responses cannot be distinguished from those of a human being. There’s a lot wrong with the Turing Test, but I still think it’s true that if we had a humanoid robot that could walk and talk and interact like a human being in a wide range of circumstances, most people wouldn’t question whether it was conscious or not. We’d like a theory to go with our robot, but the main thing is whether it works. Even if we knew it worked in ways that were totally unlike biological brains, it wouldn’t matter – planes don’t fly the way birds do, but so what? It’s still flying. Of course we’re a million miles from such a perfectly human robot, but we sort of know where we’re going.

It’s a little harder for neurologists; they can’t rely quite so heavily on a practical demonstration, and reverse engineering consciousness is tough. Still, there are some feats that could be pulled off that would pretty much suggest the neurologists have got it. If we could reliably read off from some scanner the contents of anyone’s mind, and better yet, insert thoughts and images at will, it would be hard to deny that the veil of mystery had been drawn back quite a distance. It would have to be a general purpose scanner, though; one that worked straight away for all thoughts in any person’s brain. People have already demonstrated that they can record a pattern from one subject’s brain when that subject is thinking a known thought, and then, in the same session with the same subject, recognise that same pattern as a sign of the same thought.  That is a much lesser achievement, and I’m not sure it gets you a cigar, let alone the Nobel prize.

What about the poor philosophers? They have no way to mount a practical demonstration, and in fact no such demonstration can save them from their difficulties. The perfectly human robot does not settle things for them; they tell it that it certainly appears able to perform a range of ‘easy’ cognitive tasks, but whether it really knows anything at all about what it’s doing is another matter. They doubt whether it really has subjective experience, even though it assures them that its own introspective evidence says it does. The engineer sitting with them points out that some of the philosophers probably doubt whether he has subjective experience.

“Oh, we do,” they admit, “in fact many of us are pretty sure we don’t have it ourselves. But somehow that doesn’t seem to make it any easier to wrap things up.”

Nor are the philosophers silenced by the neurologists’ scanner, which reveals that an apparently comatose patient is in fact fully aware and thinking of Christmas. The neurologists wake up the subject, who readily confirms that their report is exactly correct. But how do they know, ask the philosophers; you could be recording an analogue of experience which gets tipped into memory only at the point of waking, or your scanner could be conditioning memory directly without any actual experience. The subject could be having zomboid dreams, which convey neural data, but no actual experience.

“No, they really couldn’t,” protest the neurologists, but in vain.

So where do philosophers look for satisfaction? Of course, the best thing of all is to know the correct answer. But you can only believe that you know. If knowledge requires you to know that you know, you’re plummeting into an infinite regress; if knowing requires appropriate justification then you’re into a worm-can opening session about justification of which there is no end. Anyway, even the most self-sufficient of us would like others to agree, if not recognise the brilliance of our solution.

Unfortunately you cannot make people agree with you about philosophy. Physicists can set off a bomb to end the argument about whether e really equals mc squared; the best philosophers can do is derive melancholy satisfaction from the belief that in fifty years someone will probably be quoting their arguments as common sense, though they will not remember who invented them, or that anyone did. Some people will happen to agree with you already, of course, which is nice, but your arguments will convert no-one; not only can you not get people to accept your case, you probably can’t even get them to read your paper. I sympathised recently with a tweet from Keith Frankish lamenting how he has to endlessly revisit bits of argument against his theory of illusionism, ones he’s dealt with many times before (oh, but illusions require consciousness; oh, if it’s an illusion, who’s being deceived…). That must indeed be frustrating, but to be honest it’s probably worse than that; how many people, having had the counter-arguments laid out yet again, accept them or remember them accurately? The task resembles that of Sisyphus, whose punishment in Hades was to roll a boulder up a hill from which it invariably rolled down again. Camus told us we must imagine Sisyphus happy, but that itself is a mental task which I find undoes itself every time I stop concentrating…

I suppose you could say that if you have to bring out your counter-arguments regularly, that itself is some indicator of having achieved some recognition. Let’s be honest, attention is what everyone wants; moral philosophers all want a mention on The Good Place, and I suppose philosophers of mind would all want to be namechecked on Westworld if Julian Jaynes hadn’t unaccountably got that one sewn up.

Since no-one is going to agree with you, except that sterling band who reached similar conclusions independently, perhaps the best thing is to get your name associated with a colourful thought experiment that lots of people want to refute. Perhaps that’s why the subject of consciousness is so full of them, from the Chinese Room to Mary the Colour Scientist, and so on. Your name gets repeated and cited that way, although there is a slight danger that it ends up being connected forever with a point of view you have since moved on from, as I believe is the case with Frank Jackson himself, who no longer endorses the knowledge argument exemplified by the Mary story.

Honestly, though, being the author of a widely contested idea is second best to being the author of a universally accepted one. There’s a Borges story about a deposed prince thrown into a cell where all he can see is a caged jaguar. Gradually he realises that the secrets of the cosmos are encoded in the jaguar’s spots, which he learns to read; eventually he knows the words of magic which would cast down his rival’s palace and restore him to power; but in learning these secrets he has attained enlightenment and no longer cares about earthly matters. I bet every philosopher who reads this story feels a mild regret; yes, of course enlightenment is great, but if only my insights allowed me to throw down a couple of palaces! That bomb thing really kicked serious ass for the physicists; if I could make something go bang, I can’t help feeling people would be a little more attentive to my corpus of work on synthetic neo-dualism…

Actually, the philosophers are not the most hopeless tribe; arguably the novelists are also engaged in a long investigation of consciousness; but those people love the mystery and don’t even pretend to want a solution. I think they really enjoy making things more complicated and even see a kind of liberation in the indefinite exploration; what can you say for people like that!

Good vibrations?

Is resonance the answer? Tam Hunt thinks it might be.

Now the idea that synchronised neuron firing might have something to do with consciousness is not new. Veterans of consciousness studies will recall a time when 40 hertz was thought to be the special, almost magical frequency that generated consciousness; people like Francis Crick thought it might be the key to the unity of consciousness and a solution to the binding problem. I don’t know what the current state of neurology on this is, but it honestly seems most likely to me that 40 hertz, or a rate in that neighbourhood, is simply what the brain does when it’s thrumming along normally. People who thought it was important were making a mistake akin to taking a car’s engine noise for a functional component (hey, no noise, no move!).

Hunt has a bit more to offer than simply speculating that resonance is important somehow, though. He links resonance with panpsychism, suggesting that neurons have little sparks of consciousness and resonance is the way they get recruited into the larger forms of awareness we experience. While I can see the intuitive appeal of the idea, it seems to me there are a lot of essential explanatory pieces missing from the picture.

The most fundamental problem here is that I simply don’t see how resonance between neurons could ever explain subjective experience. Resonance is a physical phenomenon, and the problem is that physical stuff just doesn’t seem to supply the ‘what-it-is-like’ special quality of experience. Hard to see why co-ordinated firing is any better in that essential respect than unco-ordinated. In fact, in one respect resonance is especially unsuitable; resonance is by its nature stable. If it doesn’t continue for at least a short period, you haven’t really got resonance. Yet consciousness often seems fleeting and flowing, moving instantaneously and continuously between different states of awareness.

There’s also, I think, some work needed on the role of neurons. First, how come our panpsychist ascent starts with neurons? We either need an account of how we get from particles up to neurons, or an account of why consciousness only starts when we get up to neurons (pretty complex entities, as we keep finding out). Second, if resonating neurons are generating consciousness, how does that sit with their day job? We know that neurons transmit signals from the senses and to the muscles, and we know that they do various kinds of processing. Do they generate consciousness at the same time, or is that delegated to a set of neurons that don’t have to do processing? If the resonance only makes content conscious, how is the content determined, and how are the resonance and the processing linked? How does resonance occur, anyway? Is it enough for neurons to be in sync, so that two groups in different hemispheres can support the same resonance? Can a group of neurons in my brain resonate with a group in yours? If there has to be some causal linkage or neuronal connection, isn’t that underlying mechanism the real seat of consciousness, with the resonance just a byproduct?

What about that panpsychist recruitment – how does it work? Hunt says an electron or an atom has a tiny amount of consciousness, but what does ‘tiny’ mean? Is it smaller in intensity, complexity, content, or what? If it were simply intensity, then it seems easy enough to see how a lot of tiny amounts could add up to something more powerful, just as a lot of small lights can achieve the effect of a single big one. But for human consciousness to be no more than the consciousness of an atom with the volume turned up doesn’t seem very satisfactory. If, on the other hand, we’re looking for more complexity and structure, how can resonance, which has the neurons all doing the same thing at the same time, possibly deliver that?

I don’t doubt that Hunt has answers to many of these questions, and perhaps it’s not reasonable to expect them all in a short article for a general readership. For me to suspend my disbelief, though, I do really need a credible hint as to the metaphysical core of the thinking. How does the purely physical phenomenon of resonance produce the phenomenal aspect of my conscious experience, the bit that goes beyond mere data registration and transmutes into the ineffable experience I am having?

A different Difference Engine

Consciousness as organised energy is the basis of a new theory offered by Robert Pepperell; the full paper is here, with a magazine article treatment here.

Pepperell suggests that we can see energy as difference, or more particularly actualised difference – that is to say, differences in the real world, not differences between abstract entities. We can imagine easily enough that the potential energy of a ball at the top of a slope is a matter of the difference between the top and bottom of the slope, and Pepperell contends that the same is equally true of the kinetic energy of the ball actually rolling down. I’m not sure that all actualised differences are energy, but that’s probably just a matter of tightening some definitions; we see what Pepperell is getting at. He says that the term ‘actualised difference’ is intended to capture the active, antagonistic nature of energy.

He rejects the idea that the brain is essentially about information processing, suggesting instead that it processes energy. He rightly points to the differing ways in which the word ‘information’ is used, but if I understand correctly his chief objection is that information is abstract, whereas the processing of the brain deals in actuality; in the actualised difference of energy, in fact.

This is crucial because Pepperell wants us to agree that ‘there is something it is like’ to undergo actualised difference. He claims we can infer this by examining nature; I’m not sure everyone will readily agree, but the idea is that we can see that what it is like to be a rope under tension differs from what it is like to be the same rope when slack. It’s important to be clear that he’s not saying the rope is conscious; having a ‘what it is like’ is for him a more primitive level of experience, perhaps not totally unlike some of the elementary states of awareness that appear in panpsychist theories (but that’s my comparison, not his).

To get the intuition that Pepperell is calling on here, I think we need to pay attention to his exclusion of abstract entities. Information-based theories take us straight to the abstract level, whereas I think Pepperell sees ‘something it is like’ as being a natural concomitant of actuality, or at any rate of actualised difference. To him this seems to be evident from simple examination, but again I think many will simply reject the idea as contrary to their own intuitions.

If we’re ready to grant that much, we can then move on to the second part of the theory, which takes consciousness to be a reflexive form of what-it-is-likeness. Pepperell cites the example of the feedback patterns which can be generated by pointing a video camera at its own output on a screen. I don’t think we are to take this analogy too literally, but it shows how a self-referential system can generate output that goes far beyond registration of the input. The proposal also plays into a relatively common intuition that consciousness, or at least some forms of it, is self-referring or second order, as in the family of HOT and HOP theories.
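The camera-and-screen analogy can be loosely imitated numerically: feed a rule its own output and a single seed value generates behaviour far richer than the input. The logistic map below is my own stand-in chosen purely for illustration, not anything Pepperell proposes:

```python
# A loose numerical analogy for the camera pointed at its own screen:
# the output of each step becomes the input of the next, and a bare
# seed value unfolds into a complex orbit.

def feedback_orbit(seed: float, r: float = 3.9, steps: int = 10):
    """Iterate the logistic map, feeding output back as input."""
    x = seed
    orbit = []
    for _ in range(steps):
        x = r * x * (1 - x)  # self-referential step
        orbit.append(round(x, 4))
    return orbit

print(feedback_orbit(0.2))
```

The point of the analogy, as I read it, is only that self-reference is generative: what comes out of the loop is not a copy of what went in.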

Taken all in all, we are of course a long way from a knock-down argument here; in fact it seems to me that Pepperell does not spend enough time adumbrating the parts of his theory that most need clarification and defence. I’m left not altogether seeing why we should think it is like anything to be a rope in any state, nor convinced that reflexive awareness of our awareness has any particular part to play in the generation of subjective consciousness (it obviously has something to do with self-awareness). But the idea that ‘something it is like’ is an inherent part of actuality does have some intuitive appeal for me, and the idea of using that as a base for the construction of more complex forms of consciousness is a tantalising start, at least.

Not Objectifiable

A bold draft paper from Tom Clark tackles the explanatory gap between experience and the world as described by physics. He says it’s all about the representational relation.

Clark’s approach is sensible in tone and intention. He rejects scepticism about the issue and accepts that there is a problem about the ‘what it is like’ of experience, while also seeking to avoid dualism or more radical metaphysical theories. He asserts that experience is not to be found anywhere in the material world, nor identified in any simple way with any physical item; it is therefore outside the account given by physics. This should not worry us, though, any more than it worries us that numbers or abstract concepts cannot be located in space; experience is real, but it is a representational reality that exists only for the conscious subjects having it.

The case is set out clearly and has an undeniable appeal. The main underlying problem, I would say, is that it skates quite lightly past some really tough philosophical questions about what representation really is and how it works, and the ontological status of experience and representations. We’re left with the impression that these are solvable enough when we get round to them, though in fact a vast amount of unavailing philosophical labour has gone into these areas over the years. If you wanted to be unkind, you could say that Clark defers some important issues about consciousness to metaphysics the way earlier generations might have deferred them to theology. On the other hand, isn’t that exactly where they should be deferred to?

I don’t by any means suggest that these deferred problems, if that’s a fair way to describe them, make Clark’s position untenable – I find it quite congenial in general – but they possibly leave him with a couple of vulnerable spots. First, he doesn’t want to be a dualist, but he seems in danger of being backed into it. He says that experience is not to be located in the physical world – so where is it, if not in another world? We can resolutely deny that there is a second world, but there has to be in some sense some domain or mode in which experience exists or subsists; why can’t we call that a world? To me, if I’m honest, the topic of dualism versus monism is tired and less significant than we might think, but there is surely some ontological mystery here, and in a vigorous common room fight I reckon Clark would find himself being accused of Platonism (or perhaps congratulated for it) or something similar.

The second vulnerability is whether representation can really play the role Clark wants it to. His version of experience is outside physics, and so in itself it plays no part in the physical world. Yet representations of things in my mind certainly influence my physical behaviour, don’t they? The keyboard before me is represented in my mind and that representation affects where my fingers are going next. But it can’t be the pure experience that does that, because it is outside the physical world. We might get round this if we specify two kinds of representational content, but if we do that the hard causal kind of representation starts to look like the real one and it may become debatable in what sense we can still plausibly or illuminatingly say that experience itself is representational.

Clark makes some interesting remarks in this connection, suggesting there is a sort of ascent going on, whereby we start with folk-physical descriptions rooted in intersubjective consensus, quite close in some sense to the inaccessibly private experience, and then move by stages towards the scientific account by gradually adopting more objective terms. I’m not quite sure exactly how this works, but it seems an intriguing perspective that might provide a path towards some useful insight.

Are highs really high?

Are psychedelic states induced by drugs such as LSD or magic mushrooms really higher states of consciousness? What do they tell us about leading theories of consciousness? An ambitious paper by Tim Bayne and Olivia Carter sets out to give the answers.

You might well think there is an insuperable difficulty here just in defining what ‘high’ means. The human love of spatial metaphors means that height has been pressed into service several times in relevant ways. There’s being in high spirits (because cheerful people are sort of bouncy while miserable ones are droopy and lie down a lot?). To have one’s mind on higher things is to detach from the kind of concerns that directly relate to survival (because instead of looking for edible roots or the tracks of an edible animal, we’re looking up doing astronomy?). Higher things particularly include spiritual matters (because Heaven is literally up there?).  In philosophy of mind we also have higher order theories, which would make consciousness a matter of thoughts about thoughts, or something like that. Quite why meta-thoughts are higher than ordinary ones eludes me. I think of it as diagrammatic, a sort of table with one line for first order, another above for second, and so on. I can’t really say why the second order shouldn’t come after and hence below the first; perhaps it has to do with the same thinking that has larger values above smaller ones on a graph, though there we’re doubly metaphoric, because on a flat piece of paper the larger graph values are not literally higher, just further away from me (on a screen, now… but enough).

Bayne and Carter in passing suggest that ‘the state associated with mild sedation is intuitively lower than the state associated with ordinary waking awareness’. I have to say that I don’t share that intuition; sedation seems to me less engaged and less clear than wakefulness but not lower. The etymology of the word suggests that sedated people are slower, or if we push further, more likely to be sitting down – aha, so that’s it! But Bayne and Carter’s main argument, which I think is well supported by the evidence, is that no single dimension can capture the many ways in which conscious states can vary. They try to uphold a distinction between states and contents, which is broadly useful, though I’m not sure it’s completely watertight. It may be difficult to draw a clean distinction, for example, between a mind in a spiritual state and one which contains thoughts of spiritual things.

Bayne and Carter are not silly enough to get lost in the philosophical forest, however, and turn to the interesting and better defined question of whether people in psychedelic states can perceive or understand things that others cannot. They look at LSD and psilocybin and review two kinds of research: questionnaire-based studies and actual performance tests. This allows them to contrast what people thought drugs did for their abilities with what the drugs actually did.

So, for example, people in psychedelic states had a more vivid experience of colours and felt they were perceiving more, but were not actually able to discriminate better in practice. That doesn’t necessarily mean they were wrong, exactly; perhaps they could simply have been having a more detailed phenomenal experience without more objective content; better qualia without better data. Could we, in fact, take the contrast between experience and performance as an unusually hard piece of evidence for the existence of qualia? The snag might be that subjects ought not to be able even to report the enhanced experience, but still it seems suggestive, especially about the meta-problem of why people may think qualia are real. To complicate the picture, it seems there is relatively good support for the idea that psychedelics may increase the rate of data uptake, as shown by such measures as rates of saccadic eye movements (the involuntary instant shifts that happen when your eyes snap automatically to something interesting, like a flash of light).

Generally psychedelics would seem to impair cognitive function, though certain kinds of working memory are spared; in particular (unsurprisingly) they reduce the ability to concentrate or focus. People feel that their creative capacities are enhanced by psychedelics; however, while they do seem to improve the ability to come up with new ideas they also diminish the ability to tell the difference between good ideas and bad, which is also an important part of successful creativity.

One interesting area is that psychedelics facilitate an experience of unity, with time stopping or being replaced by a sense of eternity, while physical boundaries disappear and are overtaken by a sense of cosmic unity. Unfortunately there is no scientific test to establish whether these are perceptions of a deeper truth or vague delusions, though we know people attach significance and value to these experiences.

In a final section Bayne and Carter take their main conclusion – that consciousness is multidimensional – and draw some possibly controversial conclusions. First, they note that the use of psychedelics has been advocated as a treatment for certain disorders of consciousness. They think their findings run contrary to this idea, because it cannot be taken for granted that the states induced by the drugs are higher in any simple sense. The conclusion is perhaps too sweeping; I should say instead that the therapeutic use of psychedelic drugs needs to be supported by a robust case which doesn’t rest on any simple sense of higher or lower states.

Bayne and Carter also think their conclusion suggests consciousness is too complex and variable for global workspace theories to be correct. These theories broadly say that consciousness provides a place where different sensory inputs and mental contents can interact to produce coherent direction, but simply being available or not available to influence cognition seems not to capture the polyvalence of consciousness, in Bayne and Carter’s view.

They also think that multidimensionality goes against the Integrated Information Theory (IIT). IIT gives a single value for level of consciousness, and so is evidently a single dimension theory. One way to achieve a reconciliation might be to accept multidimensionality and say that IIT defines only one of those dimensions, albeit an important one (“awareness”?). That might involve reducing IIT’s claims in a way its proponents would be unwilling to do, however.

 

What’s Wrong with Dualism?

I had an email exchange with Philip Calcott recently about dualism; here’s an edited version. (Favouring my bits of the dialogue, of course!)

Philip: The main issue that puzzles me regarding consciousness is why most people in the field are so wedded to physicalism, and why substance dualism is so out of favour. It seems to me that there is indeed a huge explanatory gap – how can any physical process explain this extraordinary (and completely unexpected on physicalism) “thing” that is conscious experience?

It seems to me that there are three sorts of gaps in our knowledge:

1. I don’t know the answer to that, but others do. Just let me google it (the exact height of Everest might be an example)
2. No one yet knows the answer to that, but we have a path towards finding the answer, and we are confident that we will discover the answer, and that this answer lies within the realm of physics (the mechanism behind high temperature superconductivity might be an example here)
3. No one can even lay out a path towards discovering the answer to this problem (consciousness)

Chalmers seems to classify consciousness as a “class 3 ignorance” problem (along the lines above). He then adopts a panpsychist approach to solve this. We have a fundamental property of nature that exhibits itself only through consciousness, and it is impossible to detect its interaction with the rest of physics in any way. How is this different from Descartes’ Soul? Basically Chalmers has produced something he claims to be still physical – but which is effectively identical to a non-physical entity.

So, why is dualism so unpopular?

I think there are two reasons. The first is not an explicit philosophical point, but more a matter of the intellectual background. In theory there are many possible versions of dualism, but what people usually want to reject when they reject it is traditional religion and traditional ideas about spirits and ghosts. A lot of people have strong feelings about this for personal or historical reasons that give an edge to their views. I suspect, for example, that this might be why Dan Dennett gives Descartes more of a beating over dualism than, in my opinion at least, he really deserves.

Second, though, dualism just doesn’t work very well. Nobody has much to offer by way of explaining how the second world or the second substance might work (certainly nothing remotely comparable to the well-developed and comprehensive account given by physics). If we could make predictions and do some maths about spirits or the second world, things would look better; as it is, it looks as if dualism just consigns the difficult issues to another world where it’s sort of presumed no explanations are required. Then again, if we could do the maths, why would we call it dualism rather than an extension of the physical, monist story?

That leads us on to the other bad problem, of how the two substances or worlds interact, one that has been a conspicuous difficulty since Descartes. We can take the view that they don’t really interact causally but perhaps run alongside each other in harmony, as Leibniz suggested; but then there seems to be little point in talking about the second world, as it explains nothing that happens and none of what we do or say. This is quite implausible to me, too, if we’re thinking particularly of subjective experience or qualia. When I am looking at a red apple, it seems to me that every bit of my subjective experience of the colour might influence my decision about whether to pick up the apple or not. Nothing in my mental world seems to be sealed off from my behaviour.

If we think there is causal interaction, then again we seem to be looking for an extension of monist physics rather than a dualism.

Yet it won’t quite do, will it, to say that the physics is all there is to it?

My view is that in fact what’s going on is that we are addressing a question which physics cannot explain, not because physics is faulty or inadequate, but because the question is outside its scope. In terms of physics, we’ve got a type 3 problem; in terms of metaphysics, I hope it’s type 2, though there are some rather discouraging arguments that suggest things are worse than that.

I think the element of mystery in conscious experience is in fact its particularity, its actual reality. All the general features can be explained at a theoretical level by physics, but not why this specific experience is real and being had by me. This is part of a more general mystery of reality, including the questions of why the world is like this in particular and not like something else, or like nothing. We try to naturalise these questions, typically by suggesting that reality is essentially historical, that things are like this because they were previously like that, so that the ultimate explanations lie in the origin of the cosmos, but I don’t think that strategy works very well.

There only seem to be two styles of explanation available here. One is the purely rational kind of reasoning you get in maths. The other is empirical observation. Neither is any good in this context; empirical explanations simply defer the issue backwards by explaining things as they are in terms of things as they once were. There’s no end to that deferral. A priori logical reasoning, on the other hand, delivers only eternal truths, whereas the whole point about reality and my experience is that it isn’t fixed and eternal; it could have been otherwise. People like Stephen Hawking try to deploy both methods, using empirical science to defer the ultimate answer back in time to a misty primordial period, a hypothetical land created by heroic backward extrapolation, where it is somehow meant to turn into a mathematical issue, but even if you could make that work I think it would be unsatisfying as an explanation of the nature of my experience here and now.

I conclude that to deal with this properly we really need a different way of thinking. I fear it might be that all we can do is contemplate the matter and hope pre- or post-theoretical enlightenment dawns, in a sort of Taoist way; but I continue to hope that eventually that one weird trick of metaphysical argument that cracks the issue will occur to someone, because like anyone brought up in the western tradition I really want to get it all back to territory where we can write out the rules and even do some maths!

As I’ve said, this all raises another question, namely why we bother about monism versus dualism at all. Most people realise that there is no single account of the world that covers everything. Besides concrete physical objects we have to consider the abstract entities; those dealt with in maths, for example, and many other fields. Any system of metaphysics which isn’t intolerably flat and limited is going to have some features that would entitle us to call it at least loosely dualist. On the other hand, everything is part of the cosmos, broadly understood, and everything is in some way related to the other contents of that cosmos. So we can equally say that any sufficiently comprehensive system can, at least loosely, be described as monist too; in the end there is only one world. Any reasonable theory will be a bit dualist and a bit monist in some respects.

That being so, the pure metaphysical question of monism versus dualism begins to look rather academic, more about nomenclature than substance. The real interest is in whether your dualism or your monism is any good as an elegant and effective explanation. In that competition materialism, which we tend to call monist, just looks to be an awfully long way ahead.

The Map of Feelings

An intriguing study by Nummenmaa et al (paper here) offers us a new map of human feelings, which it groups into five main areas; positive emotions, negative emotions, cognitive operations, homeostatic functions, and sensations of illness. The hundred feelings used to map the territory are all associated with physical regions of the human body.

The map itself is mostly rather interesting and the five groups seem to make broad sense, though a superficial look also reveals a few apparent oddities. ‘Wanting’ here is close to ‘orgasm’. For some years now I’ve wanted to clarify the nature of consciousness; writing this blog has been fun, but dear reader, never quite like that. I suppose ‘wanting’ is being read as mainly a matter of biological appetites, but the desire and its fulfilment still seem pretty distinct to me, even on that reading.

Generally, a number of methodological worries come to mind, many of which are connected with the notorious difficulties of introspective research. ‘Feelings’ is a rather vaguely inclusive word, to begin with. There are a number of different approaches to classifying the emotions already available, but I have not previously encountered an attempt to go wider for a comprehensive coverage of every kind of feeling. It seems natural to worry that ‘feelings’ in this broad sense might in fact be a heterogeneous grouping, more like several distinct areas bolted together by an accident of language; it certainly feels strange to see thinking and urination, say, presented as members of the same extended family. But why not?

The research seems to rest mainly on responses from a group of more than 1000 subjects, though the paper also mentions drawing on the NeuroSynth meta-analysis database in order to look at neural similarity. The study imported some assumptions by using a list of 100 feelings, and by using four hypothesized basic dimensions – mental experience, bodily sensation, emotion, and controllability. It’s possible that some of the final structure of the map reflects these assumptions to a degree. But it’s legitimate to put forward hypotheses, and that perhaps need not worry us too much so long as the results seem consistent and illuminating. I’m a little less comfortable with the notion here of ‘similarity’; subjects were asked to put feelings closer the more similar they felt them to be, in two dimensions. I suspect that similarity could be read in various ways, and the results might be very vulnerable to priming and contextual effects.

Probably the least palatable aspect, though, is the part of the study relating feelings to body regions. Respondents were asked to say where they felt each of the feelings, with ‘nowhere’, ‘out there’ or ‘in Platonic space’ not being admissible responses. No surprises about where urination was felt, nor, I suppose, about the fact that the cognitive stuff was considered to be all in the head. But the idea that thinking is simply a brain function is philosophically controversial, under attack from, among others, those who say ‘meaning ain’t in the head’, those who champion the extended mind (if you’re counting on your fingers, are you still thinking with just your brain?), those who warn us against the ‘mereological fallacy’, and those like our old friend Riccardo Manzotti, who keeps trying to get us to understand that consciousness is external.

Of course it depends what kind of claim these results might be intended to ground. As a study of ‘folk’ psychology, they would be unobjectionable, but we are bound to suspect that they might be called in support of a reductive theory. The reductive idea that feelings are ultimately nothing more than bodily sensations is a respectable one with a pedigree going back to William James and beyond; but in the context of claims like that a study that simply asks subjects to mark up on a diagram of the body where feelings happen is begging some questions.

Rosehip Neurons of Consciousness

A new type of neuron is a remarkable discovery; finding one in the human cortex makes it particularly interesting, and the further fact that it cannot be found in mouse brains and might well turn out to be uniquely human – that is legitimately amazing. A paper (preprint here) in Nature Neuroscience announces the discovery of ‘rosehip neurons’, named for their large, “rosehip”-like axonal boutons.

There has already been some speculation that rosehip neurons might have a special role in distinctive aspects of human cognition, especially human consciousness, but at this stage no-one really has much idea. Rosehip neurons are inhibitory, but inhibiting other neurons is often a key function which could easily play a big role in consciousness. Most of the traffic between the two hemispheres of the human brain is inhibitory, for example, possibly a matter of the right hemisphere, with its broader view, regularly ‘waking up’ the left out of excessively focused activity.

We probably shouldn’t, at any rate, expect an immediate explanatory breakthrough. One comparison which may help to set the context is the case of spindle neurons. First identified in 1929, these are a notable feature of the human cortex and at first appeared to occur only in the great apes – they, or closely analogous neurons, have since been spotted in a few other animals with large brains, such as elephants and dolphins. I believe we still really don’t know why they’re there or what their exact role is, though a good guess seems to be that it might be something to do with making larger brains work efficiently.

Another warning against over-optimism might come from remembering the immense excitement about mirror neurons some years ago. Their response to a given activity both when performed by the subject and when observed being performed by others, seemed to some to hold out a possible key to empathy, theory of mind, and even more. Alas, to date that hope hasn’t come to anything much, and in retrospect it looks as if rather too much significance was read into the discovery.

The distinctive presence of rosehip neurons is definitely a blow to the usefulness of rodents as experimental animals for the exploration of the human brain, and it’s another item to add to the list of things that brain simulators probably ought to be taking into account, if only we could work out how. That touches on what might be the most basic explanatory difficulty here, namely that you cannot work out the significance of a new component in a machine whose workings you don’t really understand to begin with.

There might indeed be a deeper suspicion that a new kind of neuron is simply the wrong kind of thing to explain consciousness. We’ve learnt in recent years that the complexity of a single neuron is very much not to be under-rated; they are certainly more than the simple switching devices they have at times been portrayed as, and they may carry out quite complex processing. But even so, there is surely a limit to how much clarification of phenomenology we can expect a single cell to yield, in the absence of the kind of wider functional theory we still don’t really have.

Yet what better pointer to such a wider functional theory could we have than an item unique to humans with a role which we can hope to clarify through empirical investigation? Reverse engineering is a tricky skill, but if we can ask ourselves the right questions maybe that longed-for ‘Aha!’ moment is coming closer after all?