Posts tagged ‘qualia’

Maybe there’s a better strategy on consciousness? An early draft paper by David Chalmers suggests we turn from the Hard Problem (explaining why there is ‘something it is like’ to experience things) and address the Meta-Problem of why people think there is a Hard Problem; why we find the explanation of phenomenal experience problematic. While the paper does make broadly clear what Chalmers’ own views are, it primarily seeks to map the territory, and does so in a way that is very useful.

Why would we decide to focus on the Meta-Problem? For sceptics, who don’t believe in phenomenal experience or think that the apparent problems about it stem from mistakes and delusions, it’s a natural piece of tidying up. In fact, for sceptics why people think there’s a problem may well be the only thing that really needs explaining or is capable of explanation. But Chalmers is not a sceptic. Although he acknowledges the merits of the broad sceptical case about phenomenal consciousness which Keith Frankish has recently championed under the label of illusionism, he believes it is indeed real and problematic. He believes, however, that illuminating the Meta-Problem through a programme of thoughtful and empirical research might well help solve the Hard Problem itself, and is a matter of interest well beyond sceptical circles.

To put my cards on the table, I think he is over-optimistic, and seems to take too much comfort from the fact that there have to be physical and functional explanations for everything. It follows that there must indeed at least be physical and functional explanations for our reports of experience, our reports of the problem, and our dispositions to speak of phenomenal experience, qualia, and so on. But it does not follow that there must be adequate and satisfying explanations.

Certainly physical and functional explanations alone would not be good enough to banish our worries about phenomenal experience. They would not make the itch go away. In fact, I would argue that they are not even adequate for issues to do with the ‘Easy Problem’, roughly the question of how consciousness allows us to produce intelligent and well-directed behaviour. We usually look for higher-level explanations even there; notably explanations with an element of teleology – ones that tell us what things are for or what they are supposed to do. Such explanations can normally be cashed out safely in non-teleological terms, such as strictly-worded evolutionary accounts; but that does not mean they are dispensable – we still need them in order to understand properly.

How much more challenging things are when we come to Hard Problem issues, where a claim that they lie beyond physics is of the essence. Chalmers’ optimism is encapsulated in a sentence when he says…

Presumably there is at least a very close tie between the mechanisms that generate phenomenal reports and consciousness itself.

There’s your problem. Illusionists can be content with explanations that never touch on phenomenal consciousness because they don’t think it exists, but no explanation that does not connect with it will satisfy qualophiles. But how can you connect with a phenomenon explanatorily without diagnosing its nature? It really seems that for believers, we have to solve the Hard Problem first (or at least, simultaneously) because believers are constrained to say that the appearance of a problem arises from a real problem.

Logically, that is not quite the case; we could say that our dispositions to talk about phenomenal experience arise from merely material causes, but just happen to be truthful about a second world of phenomenal experience, or are truthful in light of a Leibnizian pre-established harmony. Some qualophiles are similarly prepared to say that their utterances about qualia are not caused by qualia, so that position might seem appealing in some quarters. To me the harmonised second world seems hopelessly redundant, and that is why something like illusionism is, at the end of the day, the only game in town.

I should make it clear that Chalmers by no means neglects the question of what sort of explanation will do; in fact he provides a rich and characteristically thorough discussion. It’s more that in my opinion, he just doesn’t know when he’s beaten, which to be fair may be an outlook essential to the conduct of philosophy.

I say that something like illusionism seems to be the only game in town, though I don’t quite call myself an illusionist. There’s a presentational difficulty for me because I think the reality of experience, in an appropriate sense, is the nub of the matter. But you could situate my view as the form of illusionism which says the appearance of ineffable phenomenal experience arises from the mistaken assumption that particular real experiences should be within the explanatory scope of general physical theory.

I won’t attempt to summarise the whole of Chalmers’ discussion, which is detailed and illuminating; although I think he is doomed to disappointment, the project he proposes might well yield good new insights; it’s often been the case that false philosophical positions were more fecund than true ones.

You may already have seen Jochen’s essay Four Verses from the Daodejing, an entry in this year’s FQXi competition. It’s a thought-provoking piece, so here are a few of the thoughts it provoked in me. In general I think it features a mix of alarming and sound reasoning which leads to a true yet perplexing conclusion.

In brief Jochen suggests that we apprehend the world only through models; in fact our minds deal only with these models. Modelling and computation are in essence the same. However, the connection between model and world is non-computable (or we face an infinite regress). The connection is therefore opaque to our minds and inexpressible. Why not, then, identify it with that other inexpressible element of cognition, qualia? So qualia turn out to be the things that incomprehensibly link our mental models with the real world. When Mary sees red for the first time, she does learn a new, non-physical fact, namely what the connection between her mental model and real red is. (I’d have to say that as something she can’t understand or express, it’s a weird kind of knowledge, but so be it.)

I think to talk of modelling so generally is misleading, though Jochen’s definition is itself broadly framed, which means I can’t say he’s wrong. In his terms it seems anything that uses data about the structure and causal functioning of X to make predictions about its behaviour would be a model. If you look at it that way, it’s true that virtually all our cognition is modelling. But to me a model leads us to think of something more comprehensive and enduring than we ought. In my mind at least, it conjures up a sort of model village or homunculus, when what’s really going on is something more fragmentary and ephemeral, with the brain lashing up a ‘model’ of my going to the shop for bread just now, and then discarding it in favour of something different. I’d argue that we can’t have comprehensive all-purpose models of ourselves (or anything) because models only ever model features relevant to a particular purpose or set of circumstances. If a model reproduced all my features it would in fact be me (by Leibniz’ Law), and anyway the list of potentially relevant features goes on for ever.

The other thing I don’t like about liberal use of modelling is that it makes us vulnerable to the view that we only experience the model, not the world. People have often thought things like this, but to me it’s almost like the idea we never see distant planets, only telescope lenses.

Could qualia be the connection between model and world? It’s a clever idea, one of those that turn out on reflection to not be vulnerable to many of the counterarguments that first spring to mind. My main problem is that it doesn’t seem right phenomenologically. Arguments from one’s own perception of phenomenology are inherently weak, but then we are sort of relying on phenomenology for our belief (if any) in qualia in the first place. A red quale doesn’t seem like a connection, more like a property of the red thing; I’m not clear why or how I would be aware of this connection at all.

However, I think Jochen’s final conclusion is both poignant and broadly true. He suggests that models can have fundamental aspects, the ones that define their essential functions – but the world is not under a similar obligation. It follows that there are no fundamentals about the world as a whole.

I think that’s very likely true, and I’d make a very similar kind of argument in terms of explanation. There are no comprehensive explanations. Take a carrot. I can explain its nutritional and culinary properties, its biology, its metaphorical use as a motivator, its supposed status as the favourite foodstuff of rabbits, and lots of other aspects; but there is no total explanation that will account for every property I can come up with; in the end there is only the carrot. A demand for an explanation of the entire world is automatically a demand for just the kind of total explanation that cannot exist.

Although I believe this, I find it hard to accept; it leaves my mind with an unscratched itch. If we can’t explain the world, how can we assimilate it? Through contemplation? Perhaps that is what Laozi would have advocated. More likely he would have told us to get on with ordinary life. Stop thinking, and end your problems!



Are we losing it?

Nick Bostrom’s suggestion that we’re most likely living in a simulated world continues to provoke discussion.  Joelle Dahm draws an interesting parallel with multiverses. I think myself that it depends a bit on what kind of multiverse you’re going for – the ones that come from an interpretation of quantum physics usually require conservation of identity between universes – you have to exist in more than one universe – which I think is both potentially problematic and strictly speaking non-Bostromic. Dahm also briefly touches on some tricky difficulties about how we could tell whether we were simulated or not, which seem reminiscent of Descartes’ doubts about how he could be sure he wasn’t being systematically deceived by a demon – hold that thought for now.

Some of the assumptions mentioned by Dahm would probably annoy Sabine Hossenfelder, who lays into the Bostromisers with a piece about just how difficult simulating the physics of our world would actually be: a splendid combination of indignation with actually knowing what she’s talking about.

Bostrom assumes that if advanced civilisations typically have a long lifespan, most will get around to creating simulated versions of their own civilisation, perhaps re-enactments of earlier historical eras. Since each simulated world will contain a vast number of people, the odds are that any randomly selected person is in fact living in a simulated world. The probability becomes overwhelming if we assume that the simulations are good enough for the simulated people to create simulations within their own world, and so on.
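The arithmetic behind that ‘overwhelming’ probability is easy to sketch. Here is a minimal illustration, with invented numbers (one base civilisation, a thousand equally populous simulations per world) that are mine, not Bostrom’s:

```python
# Back-of-envelope arithmetic for Bostrom-style odds.
# The figures (1 base civilisation, 1000 simulations per world)
# are purely illustrative assumptions, not drawn from Bostrom's paper.

def fraction_simulated(real_civs, sims_per_civ, levels=1):
    """Fraction of all observers living in a simulation, assuming every
    world (real or simulated) runs `sims_per_civ` equally populous
    simulations, nested `levels` deep."""
    simulated = 0
    current = real_civs
    for _ in range(levels):
        current *= sims_per_civ   # each world at this level spawns more
        simulated += current
    return simulated / (real_civs + simulated)

print(fraction_simulated(1, 1000))      # one level: already ~0.999
print(fraction_simulated(1, 1000, 3))   # nesting pushes it towards 1
```

Even a single level of simulation makes a randomly selected observer a thousand times more likely to be simulated than real; nesting only sharpens the conclusion.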

There’s  plenty of scope for argument about whether consciousness can be simulated computationally at all, whether worlds can be simulated in the required detail, and certainly about the optimistic idea of nested simulations. But recently I find myself thinking, isn’t it simpler than that? Are we simulated people in a simulated world? No, because we’re real, and people in a simulation aren’t real.

When I say that, people look at me as if I were stupid, or at least, impossibly naive. Dude,  read some philosophy, they seem to say. Dontcha know that Socrates said we are all just grains of sand blowing in the wind?

But I persist – nothing in a simulation actually exists (clue’s in the name), so it follows that if we exist, we are not in a simulation. Surely no-one doubts their own existence (remember that parallel with Descartes), or if they do, only on the kind of philosophical level where you can doubt the existence of anything? If you don’t even exist, why do I even have to address your simulated arguments?

I do, though. Actually, non-existent people can have rather good arguments; dialogues between imaginary people are a long-established philosophical method (in my feckless youth I may even have indulged in the practice myself).

But I’m not entirely sure what the argument against reality is. People do quite often set out a vision of the world as powered by maths; somewhere down there the fundamental equations are working away and the world is what they’re calculating. But surely that is the wrong way round; the equations describe reality, they don’t dictate it. A system of metaphysics that assumes the laws of nature really are explicit laws set out somewhere looks tricky to me; and worse, it can never account for the arbitrary particularity of the actual world. We sort of cling to the hope that this weird specificity can eventually be reduced away by titanic backward extrapolation to a hypothetical time when the cosmos was reduced to the simplicity of a single point, or something like it; but we can’t make that story work without arbitrary constants, and the result doesn’t seem like the right kind of explanation anyway. We might appeal instead to the idea that the arbitrariness of our world arises from its being an arbitrary selection out of the incalculable banquet of the multiverse, but that doesn’t really explain it.

I reckon that reality just is the thing that gets left out of the data and the theory; but we’re now so used to the supremacy of those two we find it genuinely hard to remember, and it seems to us that a simulation with enough data is automatically indistinguishable from real events – as though once your 3D printer was programmed, there was really nothing to be achieved by running it.

There’s one curious reference in Dahm’s piece which makes me wonder whether Christof Koch agrees with me. She says the Integrated Information Theory doesn’t allow for computer consciousness. I’d have thought it would; but the remarks from Koch she quotes seem to be about how you need not just the numbers about gravity but actual gravity too, which sounds like my sort of point.

Regular readers may already have noticed that I think this neglect of reality also explains the notorious problem of qualia; they’re just the reality of experience. When Mary sees red, she sees something real, which of course was never included in her perfect theoretical understanding.

I may be naive, but you can’t say I’m not consistent…

We are in danger of being eliminated by aliens who aren’t even conscious, says Susan Schneider. Luckily, I think I see some flaws in the argument.

Humans are probably not the greatest intelligences in the Universe, she suggests; others probably have been going for billions of years longer. Perhaps, but maybe they have all attained enlightenment and moved on from this plane, leaving us young dummies the cleverest or the only people around?

Schneider thinks the older cultures are likely to be post-biological, having moved on into machine forms of intelligence. This transition may only take a few hundred years, she suggests, to ‘judge from the human experience’ (Have we transitioned? Did I miss it?). She says transistors are much faster than neurons and computer power is almost indefinitely expandable, so AI will end up much cleverer than us.

Then there may be a problem over controlling these superlatively bright computers, as foreseen by Stephen Hawking, Elon Musk, and Bill Gates. Bill Gates? The man who, by exploiting the monopoly handed to him by IBM, was able to impose on us all the crippled memory management of DOS and the endless vulnerabilities of Windows? Well, OK; not sure he has much idea about technology, but he’s got form on trying to retain control of things.

Schneider more or less takes it for granted that computation is cogitation, and that faster computation means smarter thinking. It’s true that computers have become very good at games we didn’t think they could play at all, and she reminds us of some examples. But to take over from human beings, computers need more than just computation. To mention two things, they need agency and intentionality, and to date they haven’t shown any capacity at all for either. I don’t rule out the possibility of both being generated artificially in future, but the ever-growing ability of computers to do more sums more quickly is strictly irrelevant. Those future artificial people of whom we know nothing may be able to exploit the power of computation – but so can we. If computers are good at winning battles, our computers can fight their computers.

Schneider also takes it for granted that her computational aliens will be hostile and likely to come over and fuck us up good if they ever know we exist. They might, for example, infect our systems with computer viruses (probably not, I think, because without Bill Gates providing their operating systems computer viruses probably remained a purely theoretical matter for them). Sending signals out into the galaxy, she reckons, is a really bad idea; our radio signals are already out there, but luckily they’re faint and easily missed (even by unimaginably super-intelligent aliens, it seems). Premature to worry, surely, because even our earliest radio signals can be no more than about a hundred light years away so far – not much of a distance in galactic terms. But why would super-intelligent entities behave like witless bullies anyway? Somewhere between benign and indifferent seems a more likely attitude.

To me this whole scenario seems to embody a selective prognosis anyway. The aliens have overcome the limitation of the speed of light, they feed off black holes (no clue, sorry) but they still run on the computation we currently think is really smart. A hundred years ago no-one would have supposed computation was going to be the dominant technology of our decade, let alone the next million years; maybe by 2116 we’ll look back on it the way we fondly remember steam locomotion.

Schneider’s most arresting thought is that her dangerous computational aliens might lack qualia, and so in that sense not be conscious. It seems to me more natural to suppose that acquiring human-style thought would necessarily involve acquiring human-style qualia. Schneider seems to share the Searlian view that qualia have something to do with unknown biological qualities of neural tissue which silicon can never share. Even if qualia could be engineered into silicon, why would the aliens bother, she asks – it’s just an extra overhead that might add unwanted ethical issues. Most surprisingly, she supposes that we might be able to test the proposition! Suppose that for medical reasons we replace parts of a functioning human brain with chips, we might then find that qualia are lost.

But how would we know? Ex hypothesi, qualia have no causal powers and so could not cause any change in our behaviour. Even if the qualia vanished, the fact could not be reported. None of the things we say about qualia were caused by qualia; that’s one of the bizarre things about them.

Anyway, I say if we’re going to indulge in this kind of wild speculation, let’s really go big; I say the super-intelligent aliens will be powered by hyper-computation, a technology that makes our concept of computation look like counting on your fingers; and they’ll have not only qualia, but hyper-qualia, experiential phenomenologica whose awesomeness we cannot even speak of. They will be inexpressibly kindly and wise and will be borne to Earth to visit us on special wave-forms, beyond our understanding but hugely hyperbolic…

Consciousness – it’s all been a terrible mistake. In a really cracking issue of the JCS (possibly the best I’ve read) Keith Frankish sets out and defends the thesis of illusionism, with a splendid array of responses from supporters and others.

How can consciousness be an illusion? Surely an illusion is itself a conscious state – a deceptive one – so that the reality of consciousness is a precondition of anything being an illusion? Illusionism, of course, is not talking about the practical, content-bearing kind of consciousness, but about phenomenal consciousness, qualia, the subjective side, what it is like to see something. Illusionism denies that our experiences have the phenomenal aspect they seem to have; it is in essence a sceptical case about phenomenal experience. It aims to replace the question of what phenomenal experience is, with the question of why people have the illusion of phenomenal experience.

In one way I wonder whether it isn’t better to stick with raw scepticism than frame the whole thing in terms of an illusion. There is a danger that the illusion itself becomes a new topic and inadvertently builds the confusion further. One reason the whole issue is so difficult is that it’s hard to see one’s way through the dense thicket of clarifications thrown up by philosophers, all demanding to be addressed and straightened out. There’s something to be said for the bracing elegance of the two-word formulation of scepticism offered by Dennett (who provides a robustly supportive response to illusionism here, as being the default case) – ‘What qualia?’. Perhaps we should just listen to the ‘tales of the qualophiles’ – there is something it is like, Mary knows something new, I could have a zombie twin – and just say a plain ‘no’ to all of them. If we do that, the champions of phenomenal experience have nothing to offer; all they can do is, as Pete Mandik puts it here, gesture towards phenomenal properties. (My imagination whimpers in fear at being asked to construe the space in which one might gesture towards phenomenal qualities, let alone the ineffable limb with which the movement might be performed; it insists that we fall back on Mandik’s other description; that phenomenalists can only invite an act of inner ostension.)

Eric Schwitzgebel relies on something like this gesturing in his espousal of definition by example as a means of getting the innocent conception of phenomenal experience he wants without embracing the dubious aspects. Mandik amusingly and cogently assails the scepticism of the illusionist case from an even more radical scepticism – meta-illusionism. Sceptics argue that phenomenalism can’t be specified meaningfully (we just circle around a small group of phrases and words that provide a set of synonyms with no definition outside the loop), but if that’s true how do we even start talking about it? Whereof we cannot speak…

Introspection is certainly the name of the game, and Susan Blackmore has a nifty argument here; perhaps it’s the very act of introspecting that creates the phenomenal qualities? Her delusionism tells us we are wrong to think that there is a continuous stream of conscious experience going on in the absence of introspection, but stops short of outright scepticism about the phenomenal. I’m not sure. William James told us that introspection must be retrospection – we can only mentally examine the thought we just had, not the one we are having now – and it seems odd to me to think that a remembered state could be given a phenomenal aspect after the fact. Easier, surely, to consider that the whole business is consistently illusory?

Philip Goff is perhaps the toughest critic of illusionism; if we weren’t in the grip of scientism, he says, we should have no difficulty in seeing that the causal role of brain activity also has a categorical nature which is the inward, phenomenal aspect. If this view is incoherent or untenable in any way, we’re owed a decent argument as to why.

Myself I think Frankish is broadly on the right track. He sets out three ways we might approach phenomenal experience. One is to accept its reality and look for an explanation that significantly modifies our understanding of the world. Second, we look for an explanation that reconciles it with our current understanding, finding explanations within the world of physics of which we already have a general understanding. Third, we dismiss it as an illusion. I think we could add ‘approach zero’: we accept the reality of phenomenal experience and just regard it as inexplicable. This sounds like mysterianism – but mysterians think the world itself makes sense; we just don’t have the brains to see it. Option zero says there is actual irreducible mystery in the real world. This conclusion is surely thoroughly repugnant to most philosophers, who aspire to clear answers even if they don’t achieve them; but I think it is hard to avoid unless we take the sceptical route. Phenomenal experience is on most mainstream accounts something over and above the physical account just by definition. A physical explanation is automatically ruled out; even if good candidates are put forward, we can always retreat and say that they explain some aspects of experience, but not the ineffable one we are after. I submit that in fact this same strategy of retreat means that there cannot be any satisfactory rational account of phenomenal experience, because it can always be asserted that something ineffable is missing.

I say philosophers will find this repugnant, but I can sense some amiable theologians sidling up to me. Those light-weight would-be scientists can’t deal with mystery and the ineffable, they say, but hey, come with us for a bit…

Regular readers may possibly remember that I think that the phenomenal aspect of experience is actually just its reality; that the particularity or haecceity of real experience is puzzling to those who think that theory must accommodate everything. That reality is itself mysterious in some sense, though: not easily accounted for and not susceptible to satisfactory explanation either by induction or deduction. It may be that to understand that in full we have to give up on these more advanced mental tools and fall back on the basic faculty of recognition, the basis of all our thinking in my view and the capacity of which both deduction and induction are specialised forms. That implies that we might have to stop applying logic and science and just contemplate reality; I suppose that might mean in turn that meditation and the mystic tradition of some religions is not exactly a rejection of philosophy as understood in the West, but a legitimate extension of the same enquiry.

Yeah, but no; I may be irredeemably Western and wedded to scientism, but rightly or wrongly, meditation doesn’t scratch my epistemic itch. Illusionism may not offer quite the right answer, but for me it is definitely asking the right questions.

Can we solve the Hard Problem with scanners? This article by Brit Brogaard and Dimitria E. Gatzia argues that recent advances in neuroimaging techniques, combined with the architectonic approach advocated by Fingelkurts and Fingelkurts, open the way to real advances.

But surely it’s impossible for physical techniques to shed any light on the Hard Problem? The whole point is that it is over and above any account which could be given by physics. In the Zombie Twin thought experiment I have a physically identical twin who has no subjective experience. His brain handles information just the way mine does, but when he registers the colour red, it’s just data; he doesn’t experience real redness. If you think that is conceivable, then you believe in qualia, the subjective extra part of experience. But how could qualia be explained by neuroimaging, when my zombie twin’s scans are exactly the same as mine, yet he has no qualia at all?

This, I think, is where the architectonics come in. The foundational axiom of the approach, as I understand it, is that the functional structure of phenomenal experience corresponds to dynamic structure within brain activity; the operational architectonics provide the bridge. (I call it an axiom, but I think the Fingelkurts twins would say that empirical research already provides support for a nested hierarchical structure which bridges the explanatory gap. They seem to take the view that operational architectonics uses a structured electrical field, which on the one hand links their view with the theories of Johnjoe McFadden and Sue Pockett, while on the other making me wonder whether advances in neuroimaging are relevant if the exciting stuff is happening outside the neurons.) It follows that investigating dynamic activity structures in the brain can tell us about the structure of phenomenal, subjective experience. That seems reasonable. After all, we might argue, qualia may be mysterious, but we know they are related to physical events; the experience of redness goes with the existence of red things in the physical world (with due allowance for complications). Why can’t we assume that subjective experience also goes with certain structured kinds of brain activity?

Two points must be made immediately. The first is that the hunt for Neural Correlates of Consciousness (NCCs) is hardly new. The advocates of architectonics, however, say that approaches along these lines fail because correlation is simply too weak a connection. Noticing that experience x and activation in region y correlate doesn’t really take us anywhere. They aim for something much harder-edged and more specific, with structured features of brain activity matched directly back to structures in an analysis of phenomenal experience (some of the papers use the framework of Revonsuo, though architectonics in general is not committed to any specific approach).

The second point is that this is not a sceptical or reductive project. I think many sceptics about qualia would be more than happy with the idea of exploring subjective experience in relation to brain structure; but someone like Dan Dennett would look to the brain structures to fully explain all the features of experience; to explain them away, in fact, so that it was clear that brain activity was in the end all we were dealing with and we could stop talking about ‘nonsensical’ qualia altogether.

By contrast the architectonic approach allows philosophers to retain the ultimate mystery; it just seeks to push the boundaries of science a bit further out into the territory of subjective experience. Perhaps Paul Churchland’s interesting paper about chimerical colours which we discussed a while ago provides a comparable case if not strictly an example.

Churchland points out that we can find the colours we experience mapped out in the neuronal structures of the brain; but interestingly the colour space defined in the brain is slightly more comprehensive than the one we actually encounter in real life. Our brains have reserved spaces for colours that do not exist, as it were. However, using a technique he describes we can experience these ‘chimerical’ colours, such as ‘dark yellow’ in the form of an afterglow. So here you experience for the first time a dark yellow quale, as predicted and delivered by neurology. Churchland would argue this shows rather convincingly that position in your brain’s colour space is essentially all there is to the subjective experience of colour. I think a follower of architectonics would commend the research for elucidating structural features of experience but hold that there was still a residual mystery about what dark yellow qualia really are in themselves, one that can only be addressed by philosophy.

It all seems like a clever and promising take on the subject to me; I do have two reservations. The first is a pessimistic doubt about whether it will ever really be possible to deliver much. The sort of finding reported by Churchland is the exception rather than the rule. Vision and hearing offer some unusual scope because they both depend on wave media which impose certain interesting structural qualities; the orderly spectrum and musical scale. Imaginatively I find it hard to think of other aspects of phenomenal experience that seem to be good candidates for structural analysis. I could be radically wrong about this and I hope I am.

The other thing is, I still find it a bit hard to get past my zombie twin; if phenomenal experience matches up with the structure of brain activity perfectly, how come he is without qualia? The sceptics and the qualophiles both have pretty clear answers; either there just are no qualia anyway or they are outside the scope of physics. Now if we take the architectonic view, we could argue that just as the presence of red objects is not sufficient for there to be red qualia, so perhaps the existence of the right brain patterns isn’t sufficient either; though the red objects and the relevant brain activity do a lot to explain the experience. But if the right brain activity isn’t sufficient, what’s the missing ingredient? It feels (I put it no higher) as if there ought to be an explanation; but perhaps that’s just where we leave the job for the philosophers?

We’ll never understand consciousness, says Edward Witten. Ashutosh Jogalekar’s post here features a video of the eminent physicist talking about fundamentals; the bit about consciousness starts around 1:10 if you’re not interested in string theory and cosmology. John Horgan has also weighed in with some comments; Witten’s view is congenial to him because of his belief that science may be approaching an end state in which many big issues are basically settled while others remain permanently mysterious. Witten himself thinks we might possibly get a “final theory” of physics (maybe even a form of string theory), but guesses that it would be of a tricky kind, so that understanding and exploring the theory would itself be an endless project, rather the way number theory, which looks like a simple subject at first glance, proves to be capable of endless further research.

Witten, in response to a slightly weird question from the interviewer, declines to define consciousness, saying he prefers to leave it undefined like one of the undefined terms set out at the beginning of a maths book. He feels confident that the workings of the mind will be greatly clarified by ongoing research so that we will come to understand much better how the mechanisms operate. But why these processes are accompanied by something like consciousness seems likely to remain a mystery; no extension of physics that he can imagine seems likely to do the job, including the kind of new quantum mechanics that Roger Penrose believes is needed.

Witten is merely recording his intuitions, so we shouldn’t try to represent him as committed to any strong theoretical position; but his words clearly suggest that he is an optimist on the so-called Easy Problem and a pessimist on the Hard one. The problem he thinks may be unsolvable is the one about why there is “something it is like” to have experiences; what it is that seeing a red rose has over and above the acquisition of mere data.

If so, I think he joins a long tradition of those who feel intuitively that that kind of consciousness just is radically different from anything explained or explainable by physics. Horgan mentions the Mysterians, notably Colin McGinn, who holds that our brain just isn’t adapted to understanding how subjective experience and the physical world can be reconciled; but we could also invoke Brentano’s contention that mental intentionality is just utterly unlike any physical phenomenon; and even trace the same intuition back to Leibniz’s famous analogy of the mill: no matter what wheels and levers you put in your machine, there’s never going to be anything that could explain a perception (particularly telling given Leibniz’s enthusiasm for calculating machines and his belief that one day thinkers could use them to resolve complex disputes). Indeed, couldn’t we argue that contemporary consciousness sceptics like Dennett and the Churchlands also see an unbridgeable gap between physics and subjective, qualia-having consciousness? The difference is simply that in their eyes this makes that kind of consciousness nonsense, not a mystery.

We have to be a bit wary of trusting our intuitions. The idea that subjective consciousness arises when we’ve got enough neurons firing may sound like the idea that wine comes about when we’ve added enough water to the jar; but the idea that enough ones and zeroes in data registers could ever give rise to a decent game of chess looks pretty strange too.

As those who’ve read earlier posts may know, I think the missing ingredient is simply reality. The extra thing about consciousness that the theory of physics fails to include is just the reality of the experience, the one thing a theory can never include. Of course, the nature of reality is itself a considerable mystery, it just isn’t the one people have thought they were talking about. If I’m right, then Witten’s doubts are well-founded but less worrying than they may seem. If some future genius succeeds in generating an artificial brain with human-style mental functions, then by looking at its structure we’ll only ever see solutions to the Easy Problem, just as we may do in part when looking at normal biological brains. Once we switch on the artificial brain and it starts doing real things, then experience will happen.

The latest JCS features a piece by Christopher Curtis Sensei about the experience of achieving mastery in Aikido. It seems he spent fifteen years cutting bokken (an exercise with wooden swords, don’t ask me), becoming very proficient technically but never satisfying the old Sensei. Finally he despaired and stopped trying; at which point, of course, he made the required breakthrough. He needed to stop thinking about it. You do feel that his teacher could perhaps have saved him a few years if he had just said so explicitly – but of course you cannot achieve the state of not thinking about something directly and deliberately. Intending to stop thinking about a pink hippo involves thinking about a pink hippo; you have to do something else altogether.

This unreflective state of mind crops up in many places; it has something to do with the desirable state of ‘flow’ in which people are said to give their best sporting or artistic performances; it seems to me to be related to the popular notion of mindfulness, and it recalls Taoist and other mystical ideas about cleared minds and going with the stream. To me it evokes Julian Jaynes, who believed that in earlier times human consciousness manifested itself to people as divine voices; what we’re after here is getting the gods to shut up at last.

Clearly this special state of mind is a form of consciousness (we don’t pass out when we achieve it) and in fact on one level I think it is very simple. It’s just the absence of second-order consciousness, of thoughts about thoughts, in other words. Some have suggested that second-order thought is the distinctive or even the constitutive feature of human consciousness; but it seems clear to me that we can in fact do without it for extended periods.

All pretty simple then. In fact we might even be able to define it physiologically – it could be the state in which the cortex stops interfering and lets the cerebellum and other older parts of the brain do their stuff uninterrupted. We might then develop a way of temporarily zapping or inhibiting cortical activity so we can all become masters of whatever we’re doing at the flick of a switch. What’s all the fuss about?

Except that arguably none of the foregoing is actually about this special state of mind at all. What we’re talking about is unconsidered thought, and I cannot report it or even refer to it without considering it; so what have I really been discussing? Some strange ghostly proxy? Nothing? Or are these worries just obfuscatory playing with words?

There’s another mental thing we shouldn’t, logically, be able to talk about – qualia. Qualia, the ineffable subjective aspect of things, are additional to the scientific account and so have no causal powers; they cannot therefore ever have caused any of the words uttered or written about them. Is there a link here? I think so. I think qualia are pure first-order experiences; we cannot talk about them because to talk about them is to move on to second-order cognition and so to slide away from the very thing we meant to address. We could say that qualia are the experiential equivalent of the pure activity which Curtis Sensei achieved when he finally cut bokken the right way. Fifteen years and I’ll understand qualia; I just won’t be able to tell anyone about it…

Consciousness is not a problem, says Michael Graziano in an Atlantic piece that is short and combative. (Also, I’m afraid, pretty sketchy in places. Space constraints might be partly to blame for that, but can’t altogether excuse some sweeping assertions made with the broadest of brushes.)

Graziano begins by drawing an analogy with Newton and his theory of light. The earlier view, he says, was that white light was pure, and colour happened when it was ‘dirtied’ by contact with the surfaces of coloured objects. The detail of exactly how this happened was a metaphysical ‘hard problem’. Newton dismissed all that by showing first, that white light is in fact a mixture of all colours, and second, that our vision produces only an inaccurate and simplified model of the reality, with only three different colour receptors.

Consciousness itself, Graziano says, is also a misleading model in a somewhat similar way, generated when the brain represents its own activity to itself. In fact, to be clear, consciousness as represented doesn’t happen; it is a mistaken construct, the result of the good-enough but far from perfect apparatus bequeathed to us by evolution (this sounds sort of familiar).

We should be clear that it is really Hard Problem consciousness that is the target here, the consciousness of subjective experience and of qualia. Not that the other sort is OK: Graziano dismisses the Easy Problem kind of consciousness, more or less in passing, as being no problem at all…

These days it’s not hard to understand how the brain can process information about the world, how it can store and recall memories, how it can construct self knowledge including even very complex self knowledge about one’s personhood and mortality. That’s the content of consciousness, and it’s no longer a fundamental mystery. It’s information, and we know how to build computers that process information.

Amazingly, that’s it. Graziano writes in an impatient tone; I have to confess to a slight ruffling of my own patience here; memory is not hard to understand? I had the impression that there were quite a number of unimpeachably respectable scientists working on the neurology of memory, but maybe they’re just doing trivial detail, the equivalent of butterfly collecting, or who knows, philosophy? …we know how to build computers… You know it’s not the 1980s any more? Yet apparently there are still clever people who think you can just say that the brain is a computer and that’s not only straightforwardly true, but pretty much a full explanation? I mean, the brain is also meat, and we know how to build tools that process meat; shall we stop there and declare the rest to be useless metaphysics?

‘Information’, as we’ve often noted before, is a treacherous, ambiguous word. If we mean something akin to data, then yes, computers can handle it; if we mean something akin to understanding, they’re no better than meat cleavers. Nothing means anything to a computer, while human consciousness reads and attributes meanings with prodigal generosity, arguably as its most essential, characteristic activity. No computer was ever morally responsible for anything, while our society is built around the idea that human beings have responsibilities, rights, and property. Perhaps Graziano has debunking arguments for all this that he hasn’t leisure to tell us about; the idea that they are all null issues with nothing worthwhile to be said about them just doesn’t fly.

Anyway, perhaps I should keep calm because that’s not even what Graziano is mainly talking about. He is really after qualia, and in that area I have some moderate sympathy with him; I think it’s true that the problem of subjective experience is most often misconceived, and it is quite plausible that the limitations of our sensory apparatus and our colour vision in particular contribute to the confusion. There is a sophisticated argument to be made along these lines: unfortunately Graziano’s isn’t it; he merely dismisses the issue: our brain plays us false and that’s it. You could perhaps get away with that if the problem were simply about our belief that we have qualia; it could be that the sensory system is just misinforming us, the way it does in the case of optical illusions. But the core problem is about people’s actual direct experience of qualia. A belief can be wrong, but an experience is still an experience even if it’s a misleading one, and the existence of any kind of subjective experience is the real core of the matter. Yes, we can still deny there is any such thing, and some people do so quite cogently, but to say that what I’m having now is not an experience but the mere belief that I’m having an experience is hard and, well, you know, actually rather metaphysical…

On examination I don’t think Graziano’s analogy with Newton works well. It’s not clear to me why the ‘older’ view is to be characterised as metaphysical (or why that would mean it was worthless). Shorn of the emotive words about dirt, the view that white light picks up colour from contact with coloured things, the way white paper picks up colour from contact with coloured crayons, seems a reasonable enough scientific hypothesis to have started with. It was wrong, but if anything it seems simpler and less abstract than the correct view. Newton himself would not have recognised any clear line between science and philosophy, and in some respects he left the true nature of light a more complicated matter, not fully resolved. His choice of particles over waves has proved to be an over-simplification and remains the subject of some cloudy ontology to this day.

Worse yet, if you think about it, it was Newton who first separated the two realms: colour as it is in the world and colour as we experience it. This is the crucial distinction that opened up the problem of qualia, first recognisably stated by Locke, a fervent admirer of Newton, some years after Newton’s work. You could argue, therefore, that if the subject of qualia is a mess, it is a mess introduced by Newton himself – and scientists shouldn’t castigate philosophers for trying to clear it up.

Tom has written a nice dialogue on the subject of qualia: it’s here.

Could we in fact learn useful lessons from talking to a robot which lacked qualia?

Perhaps not; one view would be that since the robot’s mind presumably works in the same way as ours, it would have similar qualia: or would think it did. We know that David Chalmers’ zombie twin talked and philosophised about its qualia in exactly the same way as the original.

It depends on what you mean by qualia, of course. Some people conceive of qualia as psychological items that add extra significance or force to experience; or as flags that draw attention to something of potential interest. Those play a distinct role in decision making and have an influence on behaviour. If robots were really to behave like us, they would have to have some functional analogue of that kind of qualia, and so we might indeed find that talking to them on the subject was really no better or worse than talking to our fellow human beings.

But those are not real qualia, because they are fully naturalised and effable things, measurable parts of the physical world. Whether you are experiencing the same blue quale as me would, if these flags or intensifiers were qualia, be an entirely measurable and objective question, capable of a clear answer. Real, philosophically interesting qualia are far more slippery than that.
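The contrast can be made vivid with a trivial sketch (my own toy illustration, nothing more): the ‘flag’ reading of qualia is just a measurable salience value attached to a percept, which biases behaviour and is therefore entirely effable – exactly what makes it, on the sceptical reading above, not a real quale.

```python
# Toy sketch of the purely functional 'flag' reading of qualia: a
# salience value attached to a percept that measurably biases decision
# making. Everything here is objective and comparable between agents.

from dataclasses import dataclass

@dataclass
class Percept:
    label: str
    salience: float  # the functional 'flag': fully measurable

def choose(percepts):
    """Pick the percept whose flag draws most attention."""
    return max(percepts, key=lambda p: p.salience)

scene = [Percept("grey wall", 0.1), Percept("red rose", 0.9)]
print(choose(scene).label)  # prints 'red rose'
```

A robot built this way would behave as if the rose ‘stood out’ for it, and whether its flag matched mine would be a straightforwardly empirical question – which is precisely why such flags fall short of the philosophically interesting kind of qualia.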

So we might expect that a robot would reproduce the functional, a-consciousness parts of our mind, and leave out the phenomenal, p-consciousness ones. Like Tom’s robot, they would presumably be puzzled by references to subjective experience. Perhaps, then, there might be no point in talking to them about it, because they would be constitutionally incapable of shedding any light on it. They could tell us what the zombie life is like, but don’t we sort of know that already? They could play the kind of part in a dialogue that Socrates’ easily-bamboozled interlocutors always seemed to do, but that’s about it, presumably?

Or perhaps they would be able to show us, by providing a contrasting example, how and why it is that we come to have these qualia? There’s something distinctly odd about the way qualia are apparently untethered from physical cause and effect, yet only appear in human beings with their complex brains. Or could it be that they’re everywhere, and it’s not that only we have them, it’s more that we’re the only entities that talk about them (or about anything)?

Perhaps talking to a robot would convince us in the end that in fact we don’t have qualia either: that they are just a confused delusion. One scarier possibility, though, is that robots would understand them all too well.

“Oh,” they might say, “Yes, of course we have those. But scanning through the literature it seems to us you humans only have a very limited appreciation of the qualic field. You experience simple local point qualia, but you have no perception of higher-order qualia; the qualia of the surface or the solid, or the complex manifold that seems so evident to us. Gosh, it must be awful…”