Posts tagged ‘consciousness’

Earlier this year Tononi’s Integrated Information Theory (IIT) gained a prestigious supporter in Max Tegmark, professor of Physics at MIT. The boost for the theory came not just from Tegmark’s prestige, however; there was also a suggestion that the IIT dovetailed neatly with some deep problems of physics, providing a possible solution and the kind of bridge between neuroscience, physics and consciousness that we could hardly have dared to hope for.

Tegmark’s paper presents the idea rather strangely, suggesting that consciousness might be another state of matter like the states of being a gas, a liquid, or a solid. That surely can’t be true in any simple literal sense because those particular states are normally considered to be mutually exclusive: becoming a gas means ceasing to be a liquid. If consciousness were another member of that exclusive set it would mean that becoming conscious involved ceasing to be solid (or liquid, or gas), which is strange indeed. Moreover Tegmark goes on to name the new state ‘perceptronium’ as if it were a new element. He clearly means something slightly different, although the misleading claim perhaps garners him sensational headlines which wouldn’t be available if he were merely saying that consciousness arose from certain kinds of subtle informational organisation, which is closer to what he really means.

A better analogy might be the many different forms carbon can take according to the arrangement of its atoms: graphite, diamond, charcoal, graphene, and so on; it can have quite different physical properties without ceasing to be carbon. Tegmark is drawing on the idea of computronium proposed by Toffoli and Margolus. Computronium is a hypothetical substance whose atoms are arranged in such a way that it consists of many tiny modules capable of performing computations.  There is, I think, a bit of a hierarchy going on here: we start by thinking about the ability of substances to contain information; the ability of a particular atomic matrix to encode binary information is a relatively rigorous and unproblematic idea in information theory. Computronium is a big step up from that: we’re no longer thinking about a material’s ability to encode binary digits, but the far more complex functional property of adequately instantiating a universal Turing machine. There are an awful lot of problems packed into that ‘adequately’.

The leap from information to computation is as nothing, however, compared to the leap apparently required to go from computronium to perceptronium. Perceptronium embodies the property of consciousness, which may not be computational at all and of which there is no agreed definition. To say that raises a few problems is rather an understatement.

Aha! But this is surely where the IIT comes in. If Tononi is right, then there is in fact a hard-edged definition of consciousness available: it’s simply integrated information, and we can even say that the quantity required is Phi. We can detect it and measure it and if we do, perceptronium becomes mathematically tractable and clearly defined. I suppose if we were curmudgeons we might say that this is actually a hit against the IIT: if it makes something as absurd as perceptronium a possibility, there must be something pretty wrong with it. We’re surely not that curmudgeonly, but there is something oddly non-dynamic here. We think of consciousness, surely, as a process, a  function: but it seems we might integrate quite a lot of information and simply have it sit there as perceptronium in crystalline stillness; the theory says it would be conscious, but it wouldn’t do anything.  We could get round that by embracing the possibility of static conscious states, like one frame out of the movie film of experience; but Tegmark, if I understand him right, adds another requirement for consciousness: autonomy, which requires both dynamics and independence; so there has to be active information processing, and it has to be isolated from outside influence, much the way we typically think of computation.
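Just to make ‘integration as a number’ a little more concrete, here is a toy sketch of my own. It is emphatically not Tononi’s Phi, which involves searching over all the ways of partitioning a system’s cause-effect structure; this is merely the ‘total correlation’ of a joint distribution, a crude cousin which is zero when the units are independent and grows as they become more integrated. The distributions are invented.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability array, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def multi_information(joint):
    """Total correlation of a joint distribution over binary units:
    the sum of the marginal entropies minus the joint entropy.
    Zero when the units are independent; larger when they are 'integrated'."""
    joint = joint / joint.sum()
    n = joint.ndim
    h_joint = entropy(joint.flatten())
    h_marginals = sum(
        entropy(joint.sum(axis=tuple(j for j in range(n) if j != i)))
        for i in range(n)
    )
    return h_marginals - h_joint

# Three binary units that always agree: highly integrated
integrated = np.zeros((2, 2, 2))
integrated[0, 0, 0] = integrated[1, 1, 1] = 0.5

# Three independent fair coins: no integration at all
independent = np.full((2, 2, 2), 1 / 8)

print(multi_information(integrated))   # 2.0 bits
print(multi_information(independent))  # 0.0 bits
```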

The really exciting part, however,  is the potential linkage with deep cosmological problems – in particular the quantum factorisation problem. This is way beyond my understanding, and the pages of equations Tegmark offers are no help, but the gist appears to be that  quantum mechanics offers us a range of possible universes.  If we want to get ‘physics from scratch’, all we have to work with is, in Tegmark’s words,

two Hermitian matrices, the density matrix ρ encoding the state of our world and the Hamiltonian H determining its time-evolution…

Please don’t ask me to explain; the point is that these things don’t pin down a single universe; there are an infinite number of acceptable solutions to the equations. If we want to know why we’ve got the universe we have – and in particular why we’ve got classical physics, more or less, and a world with an object hierarchy – we need something more. Very briefly, I take Tegmark’s suggestion to be that consciousness, with its property of autonomy, tends naturally to pick out versions of the universe in which there are similarly integrated and independent entities – in other words the kind of object-hierarchical world we do in fact see around us. To put it another way and rather baldly, the universe looks like this because it’s the only kind of universe which is compatible with the existence of conscious entities capable of perceiving it.
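For anyone who wants to see the equation lurking behind that quoted sentence, the standard rule by which a Hamiltonian determines the time-evolution of a density matrix is the von Neumann equation – nothing specific to Tegmark, just textbook quantum mechanics:

```latex
i\hbar\,\frac{d\rho}{dt} \;=\; [H,\rho] \;\equiv\; H\rho - \rho H
```

As I understand the factorisation problem, the trouble is that this equation is indifferent to how the Hilbert space is carved up into subsystems: re-describe everything with a unitary change of basis (ρ → UρU†, H → UHU†) and you have an equally valid solution, which is roughly why something extra is needed to pick out the familiar world of separate objects and observers.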

That’s some pretty neat footwork, although frankly I have to let Tegmark take the steering wheel through the physics and in at least one place I felt a little nervous about his driving. It’s not a key point, but consider this passage:

Indeed, Penrose and others have speculated that gravity is crucial for a proper understanding of quantum mechanics even on small scales relevant to brains and laboratory experiments, and that it causes non-unitary wavefunction collapse. Yet the Occam’s razor approach is clearly the commonly held view that neither relativistic, gravitational nor non-unitary effects are central to understanding consciousness or how conscious observers perceive their immediate surroundings: astronauts appear to still perceive themselves in a semi-classical 3D space even when they are effectively in a zero-gravity environment, seemingly independently of relativistic effects, Planck-scale spacetime fluctuations, black hole evaporation, cosmic expansion of astronomically distant regions, etc.

Yeah… no. It’s not really possible that a professor of physics at MIT thinks that astronauts float around their capsules because the force of gravity is literally absent, is it? That kind of  ‘zero g’ is just an effect of being in orbit. Penrose definitely wasn’t talking about the gravitational effects of the Earth, by the way; he explicitly suggests an imaginary location at the centre of the Earth so that they can be ruled out. But I must surely be misunderstanding.

So far as consciousness is concerned, the appeal of Tegmark’s views will naturally be tied to whether one finds the IIT attractive, though they surely add a bit of weight to that idea. So far as quantum factorisation is concerned, I think he could have his result without the IIT if he wanted: although the IIT makes it particularly neat, it’s more the concept of autonomy he relies on, and that would very likely still be available even if our view of consciousness were ultimately somewhat different. The linkage with cosmological metaphysics is certainly appealing, essentially a sensible version of the Anthropic Principle, which Stephen Hawking for one has been prepared to invoke in a much less attractive form.

Yes: I feel pretty sure that anyone reading this is indeed conscious. However, the NYT recently ran a short piece from Michael S. A. Graziano which apparently questioned it. A fuller account of his thinking is in this paper from 2011; the same ideas were developed at greater length in his book Consciousness and the Social Brain.

I think the startling headline on the NYT piece misrepresents Graziano somewhat. The core of his theory is that awareness is in some sense a delusion, the underlying reality of which is simply attention. We have ways of recognising the attention of other organisms, and what it is fixed on (the practical value of that skill in environments where human beings may be either hunters or hunted is obvious): awareness is just our garbled version of attention. He offers the analogy of colour. The reality out there is different wavelengths of light: colour, our version of that, is a slightly messed-up, neatened, largely artificial rendering which is nevertheless very vivid to us.

I don’t think Graziano is even denying that awareness exists, in some sense: as a phenomenon of some kind it surely does. What he means is rather that it isn’t veridical: what it tells us about itself, and what it tells us about attention, isn’t really true. As he acknowledges in the paper, there are labelling issues here, and I believe it would be possible to agree with the substance of what he says while recasting it in terms that look superficially much more conventional.

Another labelling issue may lurk around the concept of attention. On some accounts, it actually presupposes consciousness: to direct one’s attention towards something is precisely to bring it to the centre of one’s consciousness. That clearly isn’t what Graziano means: he has in mind a much more basic meaning. Attention for him is something simple like having your sensory organs locked on to a particular target. This needs to be clear and unambiguous, because otherwise we can immediately see potential problems over having to concede that cameras or other simple machines are capable of attention; but I’m happy to concede that we could probably put together some kind of criterion, perhaps neurological, that would fit the bill well enough and give Graziano the unproblematic materialist conception of attention that he needs.

All that looks reasonably OK as applied to other people, but Graziano wants the same system to supply our own mistaken impression of awareness. Just as we track the attention of others with the false surrogate of awareness, we pick up our own attentive states and make the same kind of mistake. This seems odd: when I sense my own awareness of something, it doesn’t feel like a deduction I’ve made from objective evidence about my own behaviour: I just sense it.  I think Graziano actually wants it to be like that for other people too. He isn’t talking about rational, Sherlock Holmes style reasoning about the awareness of other people, he has in mind something like a deep, old, lizard-brain kind of thing; like the sense of somebody there that makes the hairs rise on the back of the neck  and your eyes quickly saccade towards the presumed person.

That is quite a useful insight, because what Graziano is concerned to deny is the reality of subjective experience, of qualia, in a word. To do so he needs to be able to explain why awareness seems so special when the reality is nothing more than information processing. I think this remains a weak spot in the theory, but the idea that it comes from a very basic system whose whole function is to generate a feeling of ‘something there’ helps quite a bit, and is at least moderately compatible with my own intuitions and introspections. What Graziano really relies on is the suggestion that awareness is a second-order matter: it’s a cognitive state about other cognitive states, something we attribute to ourselves and not, as it seems to be, directly about the real world. It just happens to be a somewhat mistaken cognitive state.

That still leaves us in some difficulty over the difference between me and other people. If my sense of my own awareness is generated in exactly the same way as my sense of the awareness of others, it ought to seem equally distant – but it doesn’t; it seems markedly more present and far less deniable.

More fundamentally, I still don’t really see why my attention should be misperceived. In the case of colours, the misrepresentation of reality comes from two sources, I think. The first is the inadequacy of our eyes: our brain has to make do with very limited data on colour (and on distance and other factors) and so has to do things like hypothesising yellow light where it should be recognising both red and green, for example. The second is that the brain wants to make it simple for us and so tries desperately to ensure that the same objects always look the same colour, although the wavelengths being received actually vary according to conditions. I find it hard to see what comparable difficulties affect our perception of attention. Why doesn’t it just seem like attention? Graziano’s view of it as a second-order matter explains how it can be wrong about attention, but not really why.

So I think the theory is less radical than it seems, and doesn’t quite nail the matter on some important points: but it does make certain kinds of sense and at the very least helps keep us roused from our dogmatic slumbers. Here’s a wild thought inspired (but certainly not endorsed) by Graziano. Suppose our sense of qualia really does come from a kind of primitive attention-detecting module. It detects our own attention and supplies that qualic feel, but since it also (in fact primarily) detects other people’s attention, should it not also provide a bit of a qualic feel for other people too? Normally when we think of our beliefs about other people, we remain in the explicit, higher realms of cognition: but what if we stay at a sort of visceral level, what if we stick with that hair-on-the-back-of-the-neck sensation? Could it be that now and then we get a whiff of other people’s qualia? Surely too heterodox an idea to contemplate…

We’re often told that when facing philosophical problems, we should try to ‘carve them at the joints’. The biggest joint on offer in the case of consciousness has seemed to be the ‘explanatory gap’ between the physical activity of neurons and the subjective experience of consciousness. Now, in the latest JCS, Reggia, Monner, and Sylvester suggest that there is another gap, and one where our attention should rightly be focussed.

They suggest that while the simulation of certain cognitive processes has proceeded quite well, the project of actually instantiating consciousness computationally has essentially got nowhere. That project, as they say, is affected by a variety of problems about defining and recognising success. But the real problem lies in an unrecognised issue: the computational explanatory gap. Whereas the explanatory gap we’re used to is between mind and brain, the computational gap is between high-level computational algorithms and low-level neuronal activity. At the high level, working top-down, we’ve done relatively well in elucidating how certain kinds of problem-solving, goal-directed computation work, and been able to simulate them relatively effectively. At the neuronal, bottom-up level we’ve been able to explain certain kinds of pattern-recognition and routine learning systems. The two different kinds of processing have complementary strengths and weaknesses, but at the moment we have no clear idea of how one is built out of the other. This is the computational explanatory gap.
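To make the two levels vivid, here is a deliberately crude sketch of my own – it has nothing to do with the authors’ own models. The first function is top-down and goal-directed: it searches for an explicit plan. The second is bottom-up: a perceptron that learns a pattern through purely local weight adjustments. Each is easy to write on its own; the computational explanatory gap is that we have no clear account of how machinery of the second kind gives rise to processes of the first kind.

```python
from collections import deque

# High-level, top-down style: explicit goal-directed search.
# Find a sequence of +3 / -5 moves that turns 0 into 7.
def search(start=0, goal=7):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for move in (+3, -5):
            nxt = state + move
            if 0 <= nxt <= 20 and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [move]))

# Low-level, bottom-up style: a perceptron that learns AND by local updates.
def train_perceptron(epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

print(search())            # [3, 3, 3, 3, -5] -- an explicit, reportable plan
print(train_perceptron())  # weights that embody AND, with no explicit plan anywhere
```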

In philosophy, the authors plausibly claim, this important gap has been overlooked because in philosophical terms these are all ‘easy problem’ matters, and so tend to be dismissed as essentially similar matters of detail. In psychology, by contrast, the gap is salient but not clearly recognised as such: the lower-level processes correspond well to those identified as sub-conscious, while the higher-level ones match up with the reportable processes generally identified as conscious.

If Reggia, Monner and Sylvester are right, the well-established quest for the neural correlates of consciousness has been all very well, but what we really need is to bridge the gap by finding the computational correlates of consciousness. As a project, bridging this gap looks relatively promising, because it is all computational. We do not need to address any spooky phenomenology, we do not need to wonder how to express ‘what it is like’, or deal with anything ineffable; we just need to find the read-across between neural networking and the high-level algorithms which we can sort of see in operation. That may not be easy, but compared to the Hard Problem it sounds quite tractable. If solved, it will deliver a coherent account right from neural activity through to high-level decision making.

Of course, that leaves us with the Hard Problem unsolved, but the authors are optimistic that success might still banish the problem. They draw an analogy with artificial life: once it seemed obvious that there was a mysterious quality of being alive, and it was unclear how simple chemistry and physics could ever account for it. That problem of life has never been solved in those terms, but as our understanding of the immensely subtle chemistry of living things has improved, it has gradually come to seem less and less obvious that it is really a problem. In a similar way the authors hope that if the computational explanatory gap can be bridged, so that we gradually develop a full account of cognitive processes from the ground-level firing of neurons up to high-level conscious problem-solving, the Hard Problem will gradually cease to seem like something we need to worry about.

That is optimistic, but not unreasonably so, and I think the new perspective offered is a very interesting and plausible one. I’m convinced that the gap exists and that it needs to be bridged: but I’m less sure that it can easily be done.  It might be that Reggia, Monner, and Sylvester are affected in a different way by the same kind of outlook they criticise in philosophers: these are all computational problems, so they’re all tractable. I’m not sure how we can best address the gap, and I suspect it’s there not just because people have failed to recognise it, but because it is also genuinely difficult to deal with.

For one thing, the authors tend to assume the problem is computational. It’s not clear that computation is of the essence here. The low-level processes at a neuronal level don’t look to be based on running any algorithm – that’s part of the nature of the gap. High-level processes may be simulable algorithmically, but that doesn’t mean that’s the way the brain actually does it. Take the example of catching a ball – how do we get to the right place to intercept a ball flying through the air? One way to do this would be some complex calculations about perspective and vectors: the brain could abstract the data, do the sums, and send back the instructions that result. We could simulate that process in a computer quite well. But we know – I think – that that isn’t the way it’s actually done: the brain uses a simpler and quicker process which never involves abstract calculation, but is based on straight matching of two inputs; a process which incidentally corresponds to a sub-optimal algorithm, but one that is good enough in practice. We just run forward if the elevation of the ball is reducing and back if it’s increasing. Fielders are incapable of predicting where a ball is going, but they can run towards the spot in such a way as to be there when the ball arrives. It might be that all the ‘higher-level’ processes are like this, and that an attempt to match up with ideally-modelled algorithms is therefore categorically off-track.
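Purely as an illustration of that contrast (the numbers and function names are mine, and the real literature on the ‘gaze heuristic’ is subtler than this), here are the two strategies side by side: one does the projectile sums and delivers an explicit prediction of the landing point; the other maps two successive glimpses of the ball’s elevation straight onto a motor command, and never represents a landing point at all.

```python
import math

G = 9.81  # m/s^2

def landing_distance(speed, angle_deg):
    """'Do the physics' strategy: abstract the data, do the sums,
    and predict exactly where the ball will come down."""
    a = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * a) / G

def fielder_rule(previous_elevation, current_elevation):
    """Heuristic strategy: no landing point is ever computed.
    Two successive glimpses of the ball's angle of elevation
    are mapped straight onto a motor command."""
    if current_elevation < previous_elevation:
        return "run forward"   # ball dropping in view: it will fall short
    if current_elevation > previous_elevation:
        return "run back"      # ball still climbing in view: it will carry
    return "hold position"

print(landing_distance(30, 40))                          # ~90.4 m: an explicit prediction
print(fielder_rule(math.radians(32), math.radians(29)))  # 'run forward': no prediction at all
```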

Even if those doubts are right, however, it doesn’t mean that the proposed re-framing of the investigation is wrong or unhelpful, and in fact I’m inclined to think it’s a very useful new perspective.

 

You’ve heard of splitting the atom: W. Alex Escobar wants to split the quale. His recent paper (short article here) proposes that in order to understand subjective experience we may need to break it down into millions of tiny units of experience. He proposes a neurological model which to my naive eyes seems reasonable: the extraordinary part is really the phenomenology.

Like a lot of qualia theorists Escobar seems to have based his view squarely on visual experience, and the idea of micro-qualia is perhaps inspired by the idea of pixels in digitised images, or other analytical image-handling techniques.  Why would the idea help explain qualia?

I don’t think Escobar explains this very directly, at least from a philosophical point of view, but you can see why the idea might appeal to some people. Panexperientialists, for example, take the view that there are tiny bits of experience everywhere, so the idea that our minds assemble complex experiences out of micro-qualia might be quite congenial to them.  As we know, Christof Koch says that consciousness arises from the integration of information, so perhaps he would see Escobar’s theory as offering a potentially reasonable phenomenal view of the same process.

Unfortunately Escobar has taken a wrong turning, as others have done before, and isn’t really talking about ineffable qualia at all: instead, we might say he is merely effing the effable.

Ineffability, the quality of being inexpressible, is a defining characteristic of qualia as canonically understood in the philosophical literature. I cannot express to you what redness is like to me; if I could, you would be able to tell whether it was the same as your experience. If qualia could be expressed, my zombie twin  (who has none) would presumably become aware of their absence; when asked what it was like to see red, he would look puzzled and admit he didn’t really know, whereas ex hypothesi he gives the same fluent and lucidly illuminating answers that I do – in spite of not having the things we’re both talking about.

Qualia, in fact, have no causal effects and cannot be part of the scientific story. That doesn’t mean Escobar’s science is wrong or uninteresting, just that what he’s calling qualia aren’t really the philosophically slippery items of experience we keep chasing in vain in our quest for consciousness.

Alright, but setting that aside, is it possible that real qualia could be made up of many micro-qualia? No, it absolutely isn’t! In physics, a table can seem to be a single thing but actually be millions of molecules.  Similarly, what looks like a flat expanse of uniform colour on a screen may actually be thousands of pixels. But qualia are units of experience; what they seem like is what they are. They don’t seem like a cloud of micro-qualia, and so they aren’t. Now there could be some neuronal or psychological story going on at a lower level which did involve micro units; but that wouldn’t make qualia themselves splittable. What they are like is all there is to them; they can’t have a hidden nature.

Alas, Escobar could not have noticed that, because he was too busy effing the Effable.

Mark O’Brien gives a good statement of the computationalist case here; clear, brief, and commendably mild and dispassionate. My impression is that enthusiasm for computationalism – approximately, the belief that human thought is essentially computational in nature – is not what it was. It’s not that computationalists lost the argument, it’s more that the robots failed to come through. What AI research delivered has so far been, in this respect, much less than the optimists had hoped.

Anyway O’Brien’s case rests on two assumptions:

  • Naturalism is true.
  • The laws of physics are computable, that is, any physical process can be simulated to any desired degree of precision by a computer.

It’s immediately clear where he’s going. To represent it crudely, the intuition here is that naturalism means the world ultimately consists of physical processes, any physical process can run on a computer, ergo anything in the world can run on a computer, ergo it must be possible to run consciousness on a computer.

There’s an awful lot packed into those two assumptions. O’Brien tackles one issue with the idea of simulation: namely that simulating something isn’t doing it for real. A simulated rainstorm doesn’t make us wet. His answer is that simulation doesn’t produce physical realities, but it does seem to work for abstract things. I think this is basically right. If we simulate a flight to Paris, we don’t end up there; but the route calculated by the program is the actual route; it makes no sense to say it’s only a simulated route, because it’s actually identical with the one we should use if we really went to Paris. So the power of simulation is greater for informational entities than for physical ones, and it’s not unreasonable to suggest that consciousness seems more like a matter of information than of material stuff.
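A toy example, with made-up towns and distances, of why the simulated route just is the route:

```python
import heapq

# Illustrative road network: distances in km (invented figures).
roads = {
    "London":     {"Dover": 125, "Folkestone": 115},
    "Dover":      {"Calais": 50},
    "Folkestone": {"Calais": 56},
    "Calais":     {"Paris": 295},
    "Paris":      {},
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: the route it returns is not a 'simulated route',
    it is simply the route, whether or not anyone physically drives it."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, d in graph[node].items():
            if nxt not in visited:
                heapq.heappush(queue, (dist + d, nxt, path + [nxt]))
    return None

print(shortest_route(roads, "London", "Paris"))
# (466, ['London', 'Folkestone', 'Calais', 'Paris']) -- but we are still at home
```

The path that comes out is not a pretend path; it is exactly the sequence of roads we would drive. What the simulation cannot do is move our bodies along it.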

There’s a deeper point, though. To simulate is not to reproduce: a simulation reproduces only the relevant aspects of the thing simulated. It’s implied that some features of the thing simulated are left out, ones that don’t matter. That’s why we get the different results for our Parisian exercise: the simulation necessarily leaves our actual physical locations untouched; those are irrelevant when it comes to describing the route, but essential when it comes to actually visiting Paris.

The problem is we don’t know which properties are relevant to consciousness, and to assume they are the kind of thing handled by computation simply begs the question. It can’t be assumed without an argument that physical properties are irrelevant here: John Searle and Roger Penrose in different ways both assert that they are of the essence. Even if consciousness doesn’t rely quite so brutally as that on the physical nature of the brain, we need to start with a knowledge of how consciousness works. Otherwise, we can’t tell whether we’ve got the right properties in our simulation –  even if they are in principle computational.

I don’t myself think Searle or Penrose are right: but I think it’s quite likely that the causal relationships in cognitive processes are the kind of essential thing a simulation would have to incorporate. This is a serious problem because there are reasons to think computer simulations never reproduce the causal relationships intact. In my brain event A causes event B and that’s all there is to it: in a computer, there’s always a script involved. At its worst what we get is a program that holds up flag A to represent event A and then flag B to represent event B: but the causality is mediated through the program. It seems to me this might well be a real issue.
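Here is the sort of thing I have in mind, in toy form – my own caricature, not anything from O’Brien. The ‘simulation’ simply replays a recorded sequence of flags; flag B follows flag A only because the script says so, and nothing about the A-flag produces the B-flag.

```python
# A toy 'simulation' of two brain events. The record says that event A is
# followed by event B, so the program holds up flag A and then flag B;
# the A-flag does not bring the B-flag about -- the scheduler does.
script = [
    {"time": 0.0, "flag": "A", "meaning": "neuron assembly A fires"},
    {"time": 0.1, "flag": "B", "meaning": "neuron assembly B fires"},
]

def run(events):
    for e in sorted(events, key=lambda e: e["time"]):
        print(f"t={e['time']:.1f}  raising flag {e['flag']}: {e['meaning']}")

run(script)
```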

O’Brien tackles another of Searle’s arguments: that you can’t get semantics from syntax; i.e., you can’t deal with meanings just by manipulating digits. O’Brien’s strategy here is to assume a robot that behaves pretty much the way I do: does it have beliefs? It says it does, and it behaves as if it did. Perhaps we’re not willing to concede that those are real beliefs: OK, let’s call them beliefs*. On examination it turns out that the differences between beliefs and beliefs* are nugatory: so on grounds of parsimony if nothing else we should assume they are the same.

The snag here is that there are no robots that behave the way I do.  We’ve had sixty years of failure since Turing: you can’t just have it as an assumption that our robot pals are self-evidently achievable (alas).  We know that human beings, when they do translation for example, extract meanings and then put the meanings into other words, whereas the most successful translation programs avoid meanings altogether and simply swap text strings for text strings according to a kind of mighty look-up table.
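In caricature – the phrase table below is mine, and real statistical systems are vastly larger and probabilistic – that look-up-table style of translation looks something like this: strings are swapped for strings, and meaning is never represented anywhere.

```python
# A toy 'mighty look-up table': longest-match phrase substitution,
# with no representation of meaning anywhere in the system.
phrase_table = {
    "good morning": "bonjour",
    "thank you very much": "merci beaucoup",
    "the cat": "le chat",
    "is on the table": "est sur la table",
}

def translate(sentence):
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # try the longest phrase starting at position i first, then shorter ones
        for j in range(len(words), i, -1):
            chunk = " ".join(words[i:j])
            if chunk in phrase_table:
                out.append(phrase_table[chunk])
                i = j
                break
        else:
            out.append(words[i])  # no entry: pass the string through untouched
            i += 1
    return " ".join(out)

print(translate("Good morning the cat is on the table"))
# 'bonjour le chat est sur la table' -- strings for strings, meanings never involved
```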

That kind of strategy won’t work when dealing with the formless complexity of the real world: you run into the analogues of the Frame Problem or you just never really get started. It doesn’t even work that well for language: we know now that human understanding of language relies on pragmatic Gricean implicatures, and no-one can formalise those.

Finally O’Brien turns to qualia, and here I agree with him on the broad picture. He describes some of the severe difficulties around qualia and says, rightly I think, that in the end it comes down to competing intuitions.  All the arguments for qualia are essentially thought experiments: if we want, we can just say ‘no’ to all of them (as Dennett and the Churchlands, for example, do). O’Brien makes a kind of zombie argument: my zombie twin, who lacks qualia but resembles me in all other respects, would claim to have qualia and would talk about them just the way we do.  So the explanation for talk about qualia is not qualia themselves: given that, there’s no reason to think we ourselves have them.

Up to a point: but we get the conclusion that my zombie twin talks about qualia purely ex hypothesi: it’s just specified. It’s not an explanation, and I think that’s what we really need to be in a position to dismiss the strong introspective sense most people have that qualia exist. If we could actually explain what makes the Twin talk about qualia, we’d be in a much better position.

So I mostly disagree, but I salute O’Brien’s exposition, which is really helpful.

An intriguing paper from Benjamin D. Young claims that we can have phenomenal experiences of which we are unaware – although experiences of which we are aware always have phenomenal content. The paper is about smell, though I don’t really see why similar considerations shouldn’t apply to other senses.

At first sight the idea of phenomenal experience of which we are unaware seems like a contradiction in terms. Phenomenal experience is the subjective aspect of consciousness, isn’t it? How could an aspect of consciousness exist without consciousness itself? Young rightly says that it is well established that things we only register subconsciously can affect our behaviour – but that can’t include the sort of experience which for some people is the real essence of consciousness, can it?

The only way I can imagine subjectivity going on in my head without me experiencing it is if someone else were experiencing it – not a matter of me experiencing things subconsciously, but of my subconscious being a real separate entity, or perhaps of it all going on in the mind of an alternate personality of the kind that seems to occur in Dissociative Identity Disorder (Multiple Personality, as it used to be called).

On further reflection, I don’t think that’s the kind of thing Young meant at all: I think instead he is drawing a distinction between explicit and inexplicit awareness. So his point is that I can experience qualia without having any accompanying conscious thought about those qualia or the experience.

That’s true and an important point. One reason qualia seem so slippery, I think, is that discussion is always in second order terms: we exchange reports of qualia. But because the things themselves are irredeemably first order they have a way of disappearing from the discussion, leaving us talking about their effable accompaniments.

Ironically, something like that may have happened in Young’s paper, as he goes on to discuss experiments which allegedly shed light on subjective experience. Smell is a complex phenomenon of course; compared with the neat structure of colours the rambling and apparently inexhaustible structure of smell space is dauntingly hard to grasp. However, smell conveniently has valence in a way that colours don’t: some smells are nice and some are nasty. Humans apparently vary their sniff rate partly in response to a smell’s valence, and Young thinks that this provides an objective, measurable way into the subjectivity of the experience.
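The sort of objective measure Young has in mind could be as simple as checking whether sniff vigour co-varies with rated pleasantness. Here is a minimal sketch with invented figures, just to show how prosaic the measurement side of it is.

```python
import math
import statistics

# Invented numbers: mean sniff volume (arbitrary units) for six odours,
# and the pleasantness each odour was rated at (-5 = vile, +5 = lovely).
sniff_volume = [1.9, 1.7, 1.4, 1.1, 0.8, 0.6]
pleasantness = [4, 3, 1, 0, -2, -4]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r(sniff_volume, pleasantness), 2))
# about 0.99 on these made-up figures: the nastier the smell, the feebler the sniff
```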

Beyond that he goes on to consider mating choice: it seems human beings, like other mammals, choose their mates partly on the basis of smell. I imagine this might be controversial to some, and some of the research Young quotes sounds amusingly naive. In answer to a questionnaire, female subjects rated body odour as an important factor in selecting a sexual partner; well yes, if a guy smells you’re maybe not going to date him, huh?

I haven’t read the study, which was doubtless on a much more sophisticated level, and Young cites a whole wealth of other interesting papers. The problem is that while this is all fascinating psychologically, none of it can properly bear on the philosophical issue because qualia, the ultimate bearers of subjectivity, are acausal and cannot affect our behaviour. This is shown clearly by the zombie twin argument: my zombie twin has no qualia but his behaviour is ex hypothesi the same as mine.

Still, the use of valence as a way in is interesting. The normal philosophical argument is that we have no way of telling whether my subjective red is your subjective green: but it’s hard to argue that my subjective nasty is your subjective nice (unless we also hypothesise that you seek out nasty experiences and avoid nice ones?).

Quentin Ruyant has written a thoughtful piece about quantum mechanics and philosophy of mind: in a nutshell he argues both that quantum theory may be relevant to the explanation of consciousness and that consciousness may be relevant to the interpretation of quantum theory.

Is quantum theory relevant to consciousness? Well, of course some people have said so, notably Sir Roger Penrose and Stuart Hameroff. I think Ruyant is right, though, that the majority of philosophers and probably the majority of physicists dismiss the idea that quantum theory might be needed to explain consciousness. People often suggest that the combination of the two only appeals because both are hard to explain: ‘here’s one mystery and here’s another: maybe one explains the other’. Besides, people say, the brain is far too big and hot and messy for anything other than classical physics to be required.

In making the case for the relevance of quantum theory, Ruyant relies on the Hard Problem.  His position is that the Hard Problem is not biological but a matter of physics, whereas the Easy Problem, to do with all the scientifically tractable aspects of consciousness, can be dealt with by biology or psychology.

Actually, turning aside from the main thread of Ruyant’s argument, there are some reasons to suggest that quantum physics is relevant to the Easy Problem. Penrose’s case, in fact, seems to suggest just that: in his view consciousness is demonstrably non-computable and some kind of novel quantum mechanics is his favoured candidate to fill the gap. Penrose’s examples, things like solving mathematical problems, look like ‘Easy’ Problem matters to me.

Although I don’t think anyone (including me) advocates the idea, it also seems possible to argue that the ‘spooky action at a distance’ associated with quantum entanglement might conceivably have something to tell us about intentionality and its remarkable power to address things that are remote and not directly connected with us.

Anyway, Ruyant is mainly concerned with the Hard Problem, and his argument is that metaphysics and physics are closely related. Topics like the essential nature of physical things straddle the borderline between the two subjects, and it is not at all implausible therefore that the deep physics of quantum mechanics might shed light on the deep metaphysics of phenomenal experience. It seems to me a weakish line of argument, possibly tinged with a bit of prejudice: some physicists are inclined to feel that while their subject deals with the great fundamentals, biology deals only with the chance details of life; sort of a more intellectual kind of butterfly collecting.  That kind of thinking is not really well founded, and it seems particularly odd to think that biology is irrelevant when considering a phenomenon that, so far as we know, appears only in animals and is definitely linked very strongly with the operation of the brain. John Searle for one argues that ‘Hard Problem’ consciousness arises from natural biological properties of brain tissue. We don’t yet know what those properties are, but in his view it’s absurd to think that the job of nerves could equally well be performed by beer cans and string. Ruth Millikan, somewhat differently, has argued that consciousness is purely biological in nature, arising from and defined by evolutionary needs.

I think the truth is that it’s difficult to get anywhere at this meta-theoretical level:  we don’t really decide what kind of theory is most likely to be right and then concentrate on that area; we decide what the true theory most likely is and then root for the kind of theory it happens to be. That, to a great extent, is why quantum theories are not very popular: no-one has come up with a particular one that is cogent and appealing.  It seems to me that Ruyant likes the idea of physics-based theories because he favours panpsychism, or panphenomenalism, and so is inclined to think that the essential nature of matter is likely to be the right place to look for a theory.

To be honest, though, I doubt whether any kind of science can touch the Hard Problem.  It’s about entities that have no causal properties and are ineffable: how could empirical science ever deal with that? It might well be that a scientist will eventually give us the answer, but if so it won’t be by doing science, because neither classical nor quantum physics can really touch the inexpressible.

Actually, though, there is a long shot. If Colin McGinn is partly on the right track, it may be that consciousness seems mysterious to us simply because we’re not looking at it the right way: our minds won’t conceptualise it correctly. Now the same could be true of quantum theory. We struggle with the interpretation of quantum mechanics, but what if we could reorient our brains so that it simply seemed natural, and we groped instead for an acceptable ‘interpretation’ of spooky classical physics? If we could make such a transformation in our mental orientation, then perhaps consciousness would make sense too? It’s possible, but we’re back to banging two mysteries together in the hope that some spark will be generated.

Ruyant’s general case, that metaphysicians should be informed by our best physics, is hard to argue with. At the moment few philosophers really engage with the physics and few physicists really grasp the philosophy. Why do philosophers avoid quantum physics? Partly, no doubt, just because it’s difficult, and relies on mathematics which few philosophers can handle. Partly also, I think there’s an unspoken fear that in learning about quantum physics your intuitions will be trained into accepting a particular weltanschauung that might not be helpful. Connected with that is the fear that quantum physics isn’t really finished or definitive. Where would I be if I came up with a metaphysical system that perfectly supported quantum theory and then a few years later it turned out that I should have been thinking in terms of string theory? Metaphysicians cross their fingers and hope they can deal with the key issues at a level of generality that means they won’t be rudely contradicted by an unexpected advance in physics a few years later.

I suppose what we really need is someone who can come up with a really good specific theory that shows the value of metaphysics informed by physics, but few people are qualified to produce one. I must say that Ruyant seems to be an exception, with an excellent grasp of the theories on both sides of the divide. Perhaps he has a theory of consciousness in his back pocket…?

Doctors at George Washington University found by chance recently that stimulating a patient’s claustrum served to disrupt consciousness temporarily (abstract). The patient was being treated for epilepsy, and during this kind of surgery it is normal to use an electrode to stimulate areas of the brain in the target area before surgery to determine their role and help ensure the least possible damage is done to important functions. The claustrum is a sheet-like structure which seems to be well connected to many parts of the brain; Crick and Koch suggested it might be ‘the conductor of the orchestra’ of consciousness.

New Scientist reported this as the discovery of the ‘on/off’ switch for consciousness; but that really doesn’t seem to be the claustrum’s function: there’s no reason at the moment to suppose it is involved in falling asleep, or anaesthesia, or other kinds of unconsciousness. The on/off idea seems more like a relatively desperate attempt to explain the discovery in layman’s terms, reminiscent of the all-purpose generic tabloid newspaper technology report in Michael Frayn’s The Tin Men:

British scientists have developed a “magic box”, it was learned last night. The new wonder device was tested behind locked doors after years of research. Results were said to have exceeded expectations… …The device is switched on and off with a switch which works on the same principle as an ordinary domestic light switch…

Actually, one of the most interesting things about the finding is that the state the patient entered did not resemble sleep or any of those other states; she did not collapse or close her eyes, but instantly stopped reading and became unresponsive – although if she had been asked to perform a repetitive task before stimulation started, she would continue for a few seconds before tailing off. On some occasions she uttered a few incoherent syllables unprompted. This does sound more novel and potentially more interesting than a mere on/off switch. She was unable to report what the experience was like as she had no memory of it afterwards – that squares with the idea that consciousness was entirely absent during stimulation, though it’s fair to note that part of her hippocampus, which has an important role in memory formation, had already been removed.

Could Crick and Koch now be vindicated? It seems likely in part: the claustrum seems at least to have some important role – but it’s not absolutely clear that it is a co-ordinating one. One of the long-running problems for consciousness has been the binding problem: how the different sensory inputs, processed and delivered at different speeds, somehow come together into a smoothly co-ordinated experience. It could be that the claustrum helps with this, though some further explanation would be needed. As a long shot, it might even be that the claustrum is part of the ‘Global Workspace’ of the mind hypothesised by Bernard Baars, an idea that is still regularly invoked and quoted.

But we must be cautious. All we really know is that stimulating the claustrum disrupted consciousness. That does not mean consciousness happens in the claustrum. If you blow up a major road junction near a car factory, production may cease, but it doesn’t mean that the junction was where the cars were manufactured. Looking at it sceptically we might note that since the claustrum is well connected it might provide an effective way of zapping several important areas at once, and it might be the function of one or more of these other areas that is essential to sustaining consciousness.

However, it is surely noteworthy that a new way of being unconscious should have been discovered. It seems an unprecedentedly pure way, with a very narrow focus on high level activity, and that does suggest that we’re close to key functions. It is ethically impossible to put electrodes in anyone’s claustrum for mere research reasons, so the study cannot be directly replicated or followed up; but perhaps the advance of technology will provide another way.

Botprize is a version of the Turing Test for in-game AIs: they don’t have to talk, just run around playing Unreal Tournament (a first-person shooter game) in a way that convinces other players that they are human. In the current version players use a gun to tag their opponents as bots or humans; the bots, of course, do the same.

The contest initially ran from 2008 up to 2012; in that final year, two of the bots exceeded the 50% benchmark of humanness. The absence of a 2013 contest might have suggested that that had wrapped things up for good: but now the 2014 contest is under way: it’s not too late to enter if you can get your bot sorted by 12 May. This time there will be two methods of judging; one called ‘first person’ (rather confusingly – that sounds as if participants will ask themselves: am I a bot?) is the usual in-game judging; the other (third person) will be a ‘crowd-sourced’ judgement based on people viewing selected videos after the event.

How does such a contest compare with the original Turing Test, a version of which is run every year as the Loebner Prize? The removal of any need to talk seems to make the test easier. Judges cannot use questions to test the bots’ memory (at least not in any detail), general knowledge, or ability to carry the thread of a conversation and follow unpredictable linkages of the kind human beings are so good at. They cannot set traps for the bots by making quirky demands (‘please reverse the order of the letters in each word when you respond’) or looking for a sense of humour.

In practice a significant part of the challenge is simply making a bot that plays the game at an approximately human level. This means the bot must never get irretrievably stuck in a corner or attempt to walk through walls; but also, it must not be too good – not a perfect shot that never misses and is inhumanly quick on the draw, for example. This kind of thing is really no different from the challenges faced by every game designer, and indeed the original bots supplied with the game don’t perform all that badly as human imitators, though they’re not generally as convincing as the contestants.

The way to win is apparently to build in typical or even exaggerated human traits. One example is that when a human player is shot at, they tend to go after the player that attacked them, even when a cool appraisal of the circumstances suggests that they’d do better to let it go. It’s interesting to reflect that if humans reliably seek revenge in this way, that tendency probably had survival value in the real world when the human brain was evolving; there must be important respects in which the game theory of the real world diverges from that of the game.
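In caricature – the rules and numbers below are invented, not taken from any actual Botprize entrant – the engineered ‘human traits’ amount to things like this: scattered aim, a reaction delay, and a disproportionate urge to chase whoever shot you last.

```python
import random

def choose_target(last_attacker, visible_enemies):
    """Toy 'human-like' targeting: if whoever shot us last is in view, chase them,
    even when a cooler appraisal would pick the nearest (easiest) target."""
    if last_attacker in visible_enemies:
        return last_attacker
    return min(visible_enemies, key=visible_enemies.get) if visible_enemies else None

def noisy_shot(true_bearing, skill=0.7):
    """Toy human-like marksmanship: aiming scatter plus a reaction delay,
    instead of the instant, perfect shot a naive bot would take."""
    scatter = random.gauss(0.0, (1.0 - skill) * 10.0)   # degrees of aiming error
    reaction = random.uniform(0.2, 0.4)                  # seconds before firing
    return true_bearing + scatter, reaction

enemies = {"alice": 12.0, "bob": 35.0}      # name -> distance in metres
print(choose_target("bob", enemies))        # 'bob': revenge beats the closer target
print(noisy_shot(90.0))                     # e.g. (93.1, 0.27)
```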

Because Botprize is in some respects less demanding than the original Turing Test, the conviction it delivers is less; the 2012 wins did not really make us believe that the relevant bots had human thinking ability, still less that they were conscious. In that respect a proper conversation carries more weight. The best chat-bots in the Loebner, however, are not at all convincing either, partly for a different reason – we know that no attempt has been made to endow them with real understanding or real thought; they are just machines designed to pass the test by faking thoughtful responses.

Ironically some of the less successful Botprize entrants have been more ambitious. In particular, Neurobot, created by Zafeiros Fountas as an MSc project, used a spiking neural network with a Global Workspace architecture; while not remotely on the scale of a human brain, this is in outline a plausible design for human-style cognition; indeed, one of the best we’ve got (which may not be saying all that much, of course). The Global Workspace idea, originated by Bernard Baars, situates consciousness as a general purpose space where inputs from different modules can be brought together and handled effectively. Although I have my reservations about that concept, it could at least reasonably be claimed that Neurobot’s functional states were somewhere on a spectrum which ultimately includes proper consciousness (interestingly, they would presumably be cognitive states of a kind which have never existed in nature, far simpler than those of most animals yet in some respects more like states of a human brain).

The 2012 winners, by contrast, like the most successful Loebner chat-bots, relied on replaying recorded sequences of real human behaviour. Alas, this seems in practice to be the Achilles heel of Turing-style tests; canned responses just work too well.

There were reports recently of a study which tested different methods for telling whether a paralysed patient retained some consciousness. In essence, PET scans seemed to be the best, better than fMRI or traditional, less technically advanced tests. PET scans could also pick out some patients who were not conscious now, but had a good chance of returning to consciousness later; though it has to be said a 74% success rate is not that comforting when it comes to questions of life and death.

In recent years doctors have attempted to diagnose a persistent vegetative state in unresponsive patients, a state in which a patient would remain alive indefinitely (with life support) but never resume consciousness; there seems to be room for doubt, though, about whether this is really a distinct clinical syndrome or just a label for the doctor’s best guess.

All medical methods use proxies, of course, whether they are behavioural or physiological; none of them aspire to measure consciousness directly. In some ways it may be best that this is so, because we do want to know what the longer term prognosis is, and for that a method which measures, say, the remaining blood supply in critical areas of the brain may be more useful than one which simply tells you whether the patient is conscious now. Although physiological tests are invaluable where a patient is incapable of responding physically, the real clincher for consciousness is always behavioural; communicative behaviour is especially convincing. The Turing test, it turns out, works for humans as well as robots.

Could there ever be a method by which we measure consciousness directly? Well, if Tononi’s theory of Phi is correct, then the consciousness meter he has proposed would arguably do that. On his view consciousness is generated by integrated information, and we could test how integratedly the brain was performing by measuring the effect of pulses sent through it. Another candidate might be available if we are convinced by the EM theories of Johnjoe McFadden; since on his view consciousness is a kind of electromagnetic field, it ought to be possible to detect it directly, although given the small scales involved it might not be easy.

How do we know whether any of these tests is working? As I said, the gold standard is always behavioural: if someone can talk to you, then there’s no longer any reasonable doubt; so if our tests pick out just those people who are able to communicate, we take it that they are working correctly. There is a snag here, though: behavioural tests can only measure one kind of consciousness: roughly what Ned Block called access consciousness, the kind which has to do with making decisions and governing behaviour. But it is widely believed that there is another kind, phenomenal consciousness, actual experience. Some people consider this the more important of the two (others, it must be added, dismiss it as a fantasy). Phenomenal consciousness cannot be measured scientifically, because it has no causal effects; it certainly cannot be measured behaviourally, because as we know from the famous thought-experiment about  philosophical ‘zombies’ who lack it, it has no effect on behaviour.

If someone lost their phenomenal consciousness and became such a zombie, would it matter? On one view their life would no longer be worth living (perhaps it would be a little like having an unconscious version of Cotard’s syndrome), but that would certainly not be their view, because they would express exactly the same view as they would if they still had full consciousness. They would be just as able to sue for their rights as a normal person, and if one asked whether there was still ‘someone in there’ there would be no real reason to doubt it. In the end, although the question is valid, it is a waste of time to worry about it because for all we know anyone could be a zombie anyway, whether they have suffered a period of coma or not.

We don’t need to go so far to have some doubts about tests that rely on communication, though. Is it conceivable that I could remain conscious but lose all my ability to communicate, perhaps even my ability to formulate explicitly articulated thoughts in my own mind?  I can’t see anything absurd about that possibility: indeed it resembles the state I imagine some animals live their whole lives in. The ability to talk is very important, but surely it is not constitutive of my personal existence?

If that’s so then we do have a problem, in principle at least, because if all of our tests are ultimately validated against behavioural criteria, they might be systematically missing conscious states which ought not to be overlooked.