
Is there a Hard Problem of physics that explains the Hard Problem of consciousness?

Hedda Hassel Mørch has a thoughtful piece in Nautilus’s interesting Consciousness issue (well worth a look generally) that raises this idea. What is the alleged Hard Problem of physics? She says it goes like this…

What is physical matter in and of itself, behind the mathematical structure described by physics?

To cut to the chase, Mørch proposes that things in themselves have a nature not touched by physics, and that nature is consciousness. This explains the original Hard Problem – we, like other things, just are by nature conscious; but because that consciousness is our inward essence rather than one of our physical properties, it is missed out in the scientific account.

I’m sympathetic to the idea that the original Hard Problem is about an aspect of the world that physics misses out, but according to me that aspect is just the reality of things. There may not, according to me, be much more that can usefully be said about it. Mørch, I think, takes two wrong turns. The first is to think that there are such things as things in themselves, apart from observable properties. The second is to think that if this were so, it would justify panpsychism, which is where she ends up.

Let’s start by looking at that Hard Problem of physics. Mørch suggests that physics is about the mathematical structure of reality, which is true enough, but the point here is that physics is also about observable properties; it’s nothing if not empirical. If things have a nature in themselves that cannot be detected directly or indirectly from observable properties, physics simply isn’t interested, because those things-in-themselves make no difference to any possible observation. No doubt some physicists would be inclined to denounce such unobservable items as absurd or vacuous, but properly speaking they are just outside the scope of physics, neither to be affirmed nor denied. It follows, I think, that this can’t be a Hard Problem of physics; it’s actually a Hard Problem of metaphysics.

This is awkward because we know that human consciousness does have physical manifestations that are readily amenable to physical investigation; all of our conscious behaviour, our speech and writing, for example. Our new Hard Problem (let’s call it the NHP) can’t help us with those; it is completely irrelevant to our physical behaviour and cannot give us any account of those manifestations of consciousness. That is puzzling and deeply problematic – but only in the same way as the old Hard Problem (OHP) – so perhaps we are on the right track after all?

The problem is that I don’t think the NHP helps us even on a metaphysical level. Since we can’t investigate the essential nature of things empirically, we can only know about it by pure reasoning; and I don’t know of any purely rational laws of metaphysics that tell us about it. Can the inward nature of things change? If so, what are the (pseudo-causal?) laws of intrinsic change that govern that process? If the inward nature doesn’t change, must we take everything to be essentially constant and eternal in itself? That Parmenidean changelessness would be particularly odd in entities we are relying on to explain the fleeting, evanescent business of subjective experience.

Of course Mørch and others who make a similar case don’t claim to present a set of a priori conclusions about their own nature; rather they suggest that the way we know about the essence of things is through direct experience. The inner nature of things is unknowable except in that one case where the thing whose inner nature is to be known is us. We know our own nature, at least. It’s intuitively appealing – but how do we know our own real nature? Why should being a thing bring knowledge of that thing? Just because we have an essential nature, that is no reason to suppose we are acquainted with that inner nature; again we seem to need some hefty metaphysics to explain this, metaphysics which is in fact lacking. All the other examples of knowledge I can think of are constructed, won through experience, not inherent. If we have to invent a new kind of knowledge to support the theory, the foundations may be weak.

At the end of the day, the simplest and most parsimonious view, I think, is to say that things just are made up of their properties, with no essential nub besides. Leibniz’s Law tells us that that’s the nature of identity. To be sure, the list will include abstract properties as well as purely physical ones, but abstract properties that are amenable to empirical test, not ones that stand apart from any possible observation. Mørch disagrees:

Some have argued that there is nothing more to particles than their relations, but  intuition rebels at this claim. For there to be a relation, there must be two things being related. Otherwise, the relation is empty—a show that goes on without performers, or a castle constructed out of thin air.

I think the argument is rather that the properties of a particle relate to each other, while these groups of related properties relate in turn to other such groups. Groups don’t require a definitive member, and particles don’t require a single definitive essence. Indeed, since the particle’s essential self cannot determine any of its properties (or it could be brought within the pale of physics) it’s hard to see how it can have a defined relation to any of them and what role the particle-in-itself can play in Mørch’s relational show.

The second point where I think Mørch goes wrong is in the leap to panpsychism. The argument seems to be that the NHP requires non-structural stuff (which she likens to the hardware on which the software of the laws of physics runs – though I myself wouldn’t buy unstructured hardware); the OHP gives us the non-structural essence of conscious experience (of course conscious experience does have structure, but Mørch takes it that down there somewhere is the structureless ineffable something-it-is-like); why not assume that the latter is universal and fills the gap exposed by the NHP?

Well, because other matter exhibits no signs of consciousness, and because the fact that our essence is a conscious essence just wouldn’t warrant the assumption that all essences are conscious ones. Wouldn’t it be simpler to think that only the essences of outwardly conscious beings are conscious essences? This is quite apart from the many problems of panpsychism, which we’ve discussed before, and which Mørch fairly acknowledges.

So I’m not convinced, but the case is a bold and stimulating one and more persuasively argued than it may seem from my account. I applaud the aims and spirit of the expedition even though I may regret the direction it took.

Set the Hard Problem aside and tackle the real problem instead, says Anil K Seth in a thought-provoking piece in Aeon. I say thought-provoking; overall I like the cut of his jib and his direction of travel: most of what he says seems right. But somehow his piece kept stimulating the cognitive faculty in my brain that generates quibbles.

He starts, for example, by saying that in philosophy a Cartesian debate over mind-stuff and matter-stuff continues to rage. Well, that discussion doesn’t look particularly lively or central to me. There are still people around who would identify as dualists in some sense, no doubt, but by and large my perception is that we’ve moved on. It’s not so much that monism won, more that that entire framing of the issue was left behind. ‘Dualist’, it seems to me, is now mainly a disparaging term applied to other people, and whether he means it or not, Seth’s remark comes across as having a tinge of that.

Indeed, he proceeds to say that David Chalmers’ hard/easy problem distinction is inherited from Descartes. I think he should show his working on that. The Hard Problem does have dualist answers, but it has non-dualist ones too. It claims there are things not accounted for by physics, but even monists accept that much. Even Seth himself surely doesn’t think that people who offer non-physics accounts of narrative or society must therefore be dualists?

Anyway, quibbling aside for now, he says we’ll get on better if we stop worrying about why consciousness exists at all and try instead to relate its features to the underlying biological processes. That is perfectly sensible. It is depressingly possible that the Hard Problem will survive every advance in understanding, even beyond the hypothetical future point when we have a comprehensive account of how the mind works. After all, we’re required to find it conceivable that my zombie twin could be exactly like me without having real subjective experience, so it must be possible that we could come to understand his mind totally without having any grasp on my qualia.

How shall we set about things, then? Seth proposes distinguishing between level of consciousness, contents, and self. That feels an uncomfortable list to me; it may be uncharacteristically tidy-minded of me, but I like all the members of a list to be exclusive and similar, whereas, as Seth confirms, the self here is to be seen as part of the contents. To me, it’s a bit as if he suggested that in order to analyse a performance of a symphony we should think about loudness, the work being performed, and the tune being played. That analogy points to another issue; ‘loudness’ is a legitimate quality of orchestral music, but we need to remember that different instruments may play at different volumes and that the music can differ in quality in lots of other important ways. Equally, the level of consciousness is not really as simple as ten places on a dial.

Ah, but Seth recognises that. He distinguishes between consciousness and wakefulness. For consciousness it’s not the number of neurons involved or their level of activity that matters. It turns out to be complexity: findings by Massimini show that pulses sent into a brain in dreamless sleep produce simple echoes; sent into a conscious brain (whose overall level of activity may not be much greater) they produce complex reflected and transformed patterns. Seth hopes that this can be the basis of a ‘consciousness meter’, the value of which for certain comatose patients is readily apparent. He is pretty optimistic generally about how much light this might shed on consciousness, rather as thermometers transformed…

“our physical understanding of heat (as average molecular kinetic energy)”

(Unexpectedly, a physics quibble; isn’t that temperature? Heat is transferred energy, isn’t it?)
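For what it’s worth, the textbook relation the quibble turns on can be put in a line or two (for an ideal monatomic gas; the symbols are the standard ones, not anything from Seth’s piece):

```latex
% Temperature tracks the average molecular kinetic energy:
\langle E_k \rangle = \tfrac{3}{2}\, k_B T
% Heat, by contrast, is energy in transit, as in the first law:
\Delta U = Q - W
```

So a thermometer measures temperature; heat is what flows between bodies at different temperatures.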

Of course a complex noise is not necessarily a symphony and complex brain activity need not be conscious; Seth thinks it needs to be informative (whatever that may mean) and integrated. This of course links with Tononi’s Integrated Information Theory, but Seth sensibly declines to go all the way with that; to say that consciousness just is integrated information seems to him to be going too far; yielding again to the desire for deep, final yet simple answers, a search which just leads to trouble.
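The contrast between a simple echo and a complex response can be illustrated with a toy version of Lempel-Ziv parsing, the compression idea behind Massimini’s measure. This is only a sketch of the general principle: the real perturbational complexity index works on binarized, source-localised EEG responses and normalises the result, none of which is attempted here.

```python
def lz_complexity(bits: str) -> int:
    """Count the distinct phrases in a greedy Lempel-Ziv parsing of a bit string.

    Highly repetitive signals parse into few phrases (very compressible);
    richer, more varied signals parse into many.
    """
    phrases = set()
    i = 0
    while i < len(bits):
        j = i + 1
        # grow the current phrase until we reach one we haven't seen before
        while j <= len(bits) and bits[i:j] in phrases:
            j += 1
        phrases.add(bits[i:j])
        i = j
    return len(phrases)

# A monotonous 'echo' versus a less repetitive pattern of the same length:
echo = "01" * 16                  # strict alternation, 32 bits
varied = "0110100110010110" * 2   # Thue-Morse-like pattern, 32 bits

print(lz_complexity(echo), lz_complexity(varied))  # prints: 10 12
```

Even on this crude count the more varied signal scores higher; the real index applies the same compressibility intuition to the brain’s response to a standard perturbation.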

Instead he proposes, drawing on the ideas of Helmholtz, that we see the brain as a prediction machine. He draws attention to the importance of top-down influences on perception; that is, instead of building up a picture from the elements supplied by the senses, the brain often guesses what it is about to see and hear, and presents us with that unless contradicted by the senses – sometimes even if it is contradicted by the senses. This is hardly new (obviously not if it comes from Helmholtz (1821-1894)), but it does seem that Seth’s pursuit of the ‘real problem’ is yielding some decent research.

Finally Seth goes on to talk a little about the self. Here he distinguishes between bodily, perspectival, volitional, narrative and social selves. I feel more comfortable with this list than with the other one – except that these are all deemed to be merely experienced. You can argue that volition is merely an impression we have; that we just think certain things are under our conscious control – but you have to argue for it. Just including that implicitly in your categorisation looks a bit question-begging.

Ah, but Seth does go on to present at least a small amount of evidence. He talks first about a variant of the rubber hand experiment, in which said item is made to ‘feel’ like your own hand: it seems that making a virtual hand flash in time with the subject’s pulse enhances the impression of ownership (compared with a hand that flashes out of synch), which is indeed interesting. And he mentions that the impression of agency we have is reinforced when our predictions about the result are borne out. That may be so, but the fact that our impression of agency can be influenced by other factors doesn’t mean our agency is merely an impression – any more than a delusion about a flashing hand proves we don’t really have a hand.

But honestly, quibbles aside this is sensible stuff. Maybe I should give all that Hard Problem stuff a rest…

Not one Hard Problem, but four. Jonathan Dorsey, in the latest JCS, says that the problem is conceived in several different ways and we really ought to sort out which we’re talking about.

The four conceptions, rewritten a bit by me for what I hope is clarity, are that the problem is to explain why phenomenal consciousness:

  1. …arises from the physical (using only a physicalist ontology)
  2. …arises from the physical, using any ontology
  3. …arises at all (presumably from the non-physical)
  4. …arises at all or cannot be explained.

I don’t really see these as different conceptions of the problem (which simply seems to be the explanation of phenomenal consciousness), but rather as different conceptions of what the expected answer is to be. That may be nit-picking; useful distinctions in any case.  Dorsey offers some pros and cons for each of the four.

In favour of number one, it’s the most tightly focused. It also sits well in context, because Dorsey sees the problem as emerging under the dominance of physics. The third advantage is that it confines the problem to physicalism and so makes life easy for non-physicalists (not sure why this is held to be one of the pros, exactly). Against; well, maybe that context is dominating too much? Also the physicalist line fails to acknowledge Chalmers’ own naturalist but non-physicalist solution (it fails to acknowledge lots of other potential solutions too, so I’m not quite clear why Chalmers gets this special status at this point – though of course he did play a key role in defining the Hard Problem).

Number two’s pros and cons are mostly weaker versions of number one’s. It too is relatively well-focused. It does not identify the Hard Problem with the Explanatory Gap (that could be a con rather than a pro in my humble opinion). It fits fairly well in context and makes life relatively easy for traditional non-physicalists. It may yield a bit too much to the context of physics and it may be too narrow.

Number three has the advantage of focusing on the basics, and Dorsey thinks it gives a nice clear line between Hard and Easy problems. It provides a unifying approach – but it neglects the physical, which has always been central to discussion.

Number four provides a fully extended version of the problem, and makes sense of the literature by bringing in eliminativism. In a similar way it gives no-one a free pass; everyone has to address it. However, in doing so it may go beyond the bounds of a single problem and extend the issues to a wide swathe of philosophy of mind.

Dorsey thinks the answer is somewhere between 2 and 3; I’m more inclined to think it’s most likely between 1 and 2.

Let’s put aside the view that phenomenal consciousness cannot be explained. There are good arguments for that conclusion, but to me they amount to opting out of a game which is by no means clearly lost. So the problem is to explain how phenomenal consciousness arises. The explanation surely has to fit into some ontology, because we need to know what kind of thing phenomenal experience really is. My view is that the high-level differences between ontologies actually matter less than people have traditionally thought. Look at it this way: if we need an ontology, then it had better be comprehensive and consistent. Given those two properties, we might as well call it a monism, because it encompasses everything and provides one view, even if that one view is complex.

So we have a monism: but it might be materialism, idealism, spiritualism, neutral monism, or many others. Does it matter? The details do matter, but if we’ve got one substance it seems to me it doesn’t matter what label we apply. Given that the material world and its ontology is the one we have by far the best knowledge of, we might as well call it materialism. It might turn out that materialism is not what we think, and it might explain all sorts of things we didn’t expect it to deal with, but I can’t see any compelling reason to call our single monist ontology anything else.

So what are the details, and what ontology have we really got? I’m aware that most regulars here are pretty radical materialists, with some exceptions (hat tip to cognicious); people who have some difficulty with the idea that the cosmos has any contents besides physical objects; uncomfortable with the idea of ideas (unless they are merely conjunctions of physical objects) and even with the belief that we can think about anything that isn’t robustly physical (so much for mathematics…). That’s not my view. I’m also a long way from being a Platonist, but I do think the world includes non-physical entities, and that that doesn’t contradict a reasonable materialism. The world just is complex and in certain respects irreducible; probably because it’s real. Reduction, maybe, is essentially a technique that applies to ideas and theories: if we can come up with a simpler version that does the job, then the simpler version is to be adopted. But it’s a mistake to think that that kind of reduction applies to reality itself; the universe is not obliged to conform to a flat ontology, and it does not. At the end of the day – and I say this with the greatest regret – the apprehension of reality is not purely a matter of finding the simplest possible description.

I believe the somewhat roomier kind of materialism I tend to espouse corresponds generally with what we should recognise as the common sense view, and this yields what might be another conception of the Hard Problem…

  1. …arises from the physical (in a way consistent with common sense)


Antti Revonsuo has a two-headed paper in the latest JCS; at least it seems two-headed to me – he argues for two conclusions that seem to be only loosely related; both are to do with the Hard Problem, the question of how to explain the subjective aspect of experience.

The first is a view about possible solutions to the Hard Problem, and how it is situated strategically. Revonsuo concludes, basically, that the problem really is hard, which obviously comes as no great surprise in itself. His case is that the question of consciousness is properly a question for cognitive neuroscience, and that equally cognitive neuroscience has already committed itself to owning the problem: but at present no path from neural mechanisms up to conscious experience seems at all viable. A good deal of work has been done on the neural correlates of consciousness, but even if they could be fully straightened out it remains largely unclear how they are to furnish any kind of explanation of subjective experience.

The gist of that is probably right, but some of the details seem open to challenge. It’s not at all clear to me that consciousness is owned by cognitive neuroscience; rather, the usual view is that it’s an intensely inter-disciplinary problem; indeed, that may well be part of the reason it’s so difficult to get anywhere. Second, it’s not at all clear how strongly committed cognitive neuroscience is to the Hard Problem. Consciousness, fair enough; consciousness is indeed irretrievably one of the areas addressed by cognitive neuroscience. But consciousness is a many-splendoured thing, and I think cognitive neuroscientists still have the option of ignoring or being sceptical about some of the fancier varieties, especially certain conceptions of the phenomenal experience which is the subject of the Hard Problem. It seems reasonable enough that you might study consciousness in the Easy Problem sense – the state of being conscious rather than unconscious, we might say – without being committed to a belief in ineffable qualia – let alone to providing a neurological explanation of them.

The second conclusion is about extended consciousness; theories that suggest conscious states are not simply states of the brain, but are partly made up of elements beyond our skull and our skin. These theories too, it seems, are not going to give us a quick answer in Revonsuo’s opinion – or perhaps any answer. Revonsuo invokes the counter example of dreams. During dreams, we appear to be having conscious experiences; yet the difference between a dream state and an unconscious state may be confined to the brain; in every other respect our physical situation may be identical. This looks like strong evidence that consciousness is attributable to brain states alone.

Once, Revonsuo acknowledges, it was possible to doubt whether dreams were really experiences; it could have been that they were false memories generated only at the moment of awakening; but he holds that research over recent years has eliminated this possibility, establishing that dreams happen over time, more or less as they seem to.

The use of dreams in this context is not a new tactic, and Revonsuo quotes Alva Noë’s counter-argument, which consists of three claims intended to undermine the relevance of dreams: first, dream experiences are less rich and stable than normal conscious experiences; second, dream seeing is not real seeing; and third, all dream experiences depend on prior real experiences. Revonsuo more or less gives a flat denial of the first, suggesting that the evidence for it is thin to non-existent: Noë simply hasn’t cited enough of it. He thinks the second counter-argument just presupposes that experiences without external content are not real experiences, which is question-begging. Just because I’m seeing a dreamed object, does that mean I’m not really seeing? On the third point he has two counter-arguments. Even if all dreams recall earlier waking experiences, they are still live experiences in themselves; they’re not just empty recall – but in any case, it isn’t true; people who are congenitally paraplegic have dreams of walking, for example.

I think Revonsuo is basically right, but I’m not sure he has absolutely vanquished the extended mind. For his dream argument to be a real clincher, the brain state of dreaming of seeing a sheep and the brain state of actually seeing a sheep have to be completely identical, or rather, potentially identical. This is quite a strong claim to make, and whatever the state of the academic evidence, I’m not sure how well it stands up to introspective examination. We know that we often take dreams to be real when we are having them, and in fact do not always or even generally realise that a dream is a dream: but looking back on it, isn’t there a difference of quality between dream states and waking states? I’m strongly tempted to think that while seeing a sheep is just seeing a sheep, the corresponding dream is about seeing a sheep, a little like seeing a film, one level higher in abstraction. But perhaps that’s just my dreams?

The Hard Problem may indeed be hard, but it ain’t new:

Twenty years ago, however, an instant myth was born: a myth about a dramatic resurgence of interest in the topic of consciousness in philosophy, in the mid-1990s, after long neglect.

So says Galen Strawson in the TLS: philosophers have been talking about consciousness for centuries. Most of what he says, including his main specific point, is true, and the potted history of the subject he includes is good, picking up many interesting and sensible older views that are normally overlooked (most of them overlooked by me, to be honest). If you took all the papers he mentioned and published them together, I think you’d probably have a pretty good book about consciousness. But he fails to consider two very significant factors and rather over-emphasises the continuity of discussion in philosophy and psychology, leaving a misleading impression.

First, yes, it’s absolutely a myth that consciousness came back to the fore in philosophy only in the mid-1990s, and that Francis Crick’s book The Astonishing Hypothesis was important in bringing that about. The allegedly astonishing hypothesis, identifying mind and brain, had indeed been a staple of philosophical discussion for centuries.  We can also agree that consciousness really did go out of fashion at one stage: Strawson grants that the behaviourists excluded consciousness from consideration, and that as a result there really was an era when it went through a kind of eclipse.

He rather underplays that, though, in two ways. First, he describes it as merely a methodological issue. It’s true that the original behaviourists stopped just short of denying the reality of consciousness, but they didn’t merely say ‘let’s approach consciousness via a study of measurable behaviour’, they excluded all reference to consciousness from psychology, an exclusion that was meant to be permanent. Second, the leading behaviourists were just the banner bearers for a much wider climate of opinion that clearly regarded consciousness as bunk, not just a non-ideal methodological approach. Interestingly, it looks to me as if Alan Turing was pretty much of this mind. Strawson says:

But when Turing suggests a test for when it would be permissible to describe machines as thinking, he explicitly puts aside the question of consciousness.

Actually Turing barely mentions consciousness; what he says is…

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion.

The question of consciousness must be at least equally meaningless in his eyes. Turing here sounds very like a behaviourist to me.

What he does represent is the appearance of an entirely new element in the discussion. Strawson represents the history as a kind of debate within psychology and philosophy: it may have been like that at one stage: a relatively civilised dialogue between the elder subject and its offspring. They’d had a bit of a bust-up when psychology ran away from home to become a science, but they were broadly friends now, recognising each other’s prerogatives, and with a lot of common heritage. But in 1950, with Turing’s paper, a new loutish figure elbowed its way up to the table: no roots in the classics, no long academic heritage, not even really a science: Artificial Intelligence. But the new arrival seized the older disciplines by the throat and shook them until their teeth rattled, threatening to take the whole topic away from them wholesale. This seminal, transformational development doesn’t feature in Strawson’s account at all. His version makes it seem as if the bitchy tea-party of philosophy continued undisturbed, while in fact after the rough intervention of AI, psychology’s muscular cousin neurology pitched in and something like a saloon-bar brawl ensued, with lots of disciplines throwing in the odd punch and even the novelists and playwrights hitching up their skirts from time to time and breaking a bottle over somebody’s head.

The other large factor he doesn’t discuss is the religious doctrine of the soul. For most of the centuries of discussion he rightly identifies, one’s permitted views about the mind and identity were set out in clear terms by authorities who in the last resort would burn you alive. That has an effect. Descartes is often criticised for being a dualist; we have no particular reason to think he wasn’t sincere, but we ought to recognise that being anything else could have got him arrested. Strawson notes that Hobbes got away with being a materialist and Hume with saying things that strongly suggested atheism; but they were exceptions, both in the more tolerant (or at any rate more disorderly) religious environment of Britain.

So although Strawson’s specific point is right, there really was a substantial sea change: earlier and more complex, but no less worthy of attention.

In those long centuries of philosophy, consciousness may have got the occasional mention, but the discussion was essentially about thought, or the mind. When Locke mentioned the inverted spectrum argument, he treated it only as a secondary issue, and the essence of his point was that the puzzle which was to become the Hard Problem was nugatory, of no interest or importance in itself.

Consciousness per se took centre stage only when religious influence waned and science moved in. For the structuralists like Wundt it was central, but the collapse of the structuralist project led directly to the long night of behaviourism we have already mentioned. Consciousness came back into the centre gradually during the second half of the twentieth century, but this time instead of being the main object of attention it was pressed into service as the last defence against AI; the final thing that computers couldn’t do. Whereas Wundt had stressed the scientific measurement of consciousness, its unmeasurability was now the very thing that made it interesting. This meant a rather different way of looking at it, and the gradual emergence of qualia for the first time as the real issue. Strawson is quite right of course that this didn’t happen in the mid-nineties; rather, David Chalmers’ formulation cemented and clarified a new outlook which had already been growing in influence for several decades.

So although the Hard Problem isn’t new, it did become radically more important and central during the latter part of the last century; and as yet the sheriff still ain’t showed up.

We’re often told that when facing philosophical problems, we should try to ‘carve them at the joints’. The biggest joint on offer in the case of consciousness has seemed to be the ‘explanatory gap’ between the physical activity of neurons and the subjective experience of consciousness. Now, in the latest JCS, Reggia, Monner, and Sylvester suggest that there is another gap, and one where our attention should rightly be focussed.

They suggest that while the simulation of certain cognitive processes has proceeded quite well, the project of actually instantiating consciousness computationally has essentially got nowhere.  That project, as they say, is affected by a variety of problems about defining and recognising success. But the real problem lies in an unrecognised issue: the computational explanatory gap. Whereas the explanatory gap we’re used to is between mind and brain, the computational gap is between high-level computational algorithms and low-level neuronal activity. At the high level, working top-down, we’ve done relatively well in elucidating how certain kinds of problem-solving, goal-directed kinds of computation work, and been able to simulate them relatively effectively.  At the neuronal, bottom-up level we’ve been able to explain certain kinds of pattern-recognition and routine learning systems. The two different kinds of processing have complementary strengths and weaknesses, but at the moment we have no clear idea of how one is built out of the other. This is the computational explanatory gap.

In philosophy, the authors plausibly claim, this important gap has been overlooked because in philosophical terms these are all ‘easy problem’ matters, and so tend to be dismissed as essentially similar matters of detail. In psychology, by contrast, the gap is salient but not clearly recognised as such: the lower-level processes correspond well to those identified as sub-conscious, while the higher-level ones match up with the reportable processes generally identified as conscious.

If Reggia, Monner and Sylvester are right, the well-established quest for the neural correlates of consciousness has been all very well, but what we really need is to bridge the gap by finding the computational correlates of consciousness. As a project, bridging this gap looks relatively promising, because it is all computational. We do not need to address any spooky phenomenology, we do not need to wonder how to express ‘what it is like’, or deal with anything ineffable; we just need to find the read-across between neural networking and the high-level algorithms which we can sort of see in operation. That may not be easy, but compared to the Hard Problem it sounds quite tractable. If solved, it will deliver a coherent account right from neural activity through to high-level decision making.

Of course, that leaves us with the Hard Problem unsolved, but the authors are optimistic that success might still banish the problem. They draw an analogy with artificial life: once it seemed obvious that there was a mysterious quality of being alive, and it was unclear how simple chemistry and physics could ever account for it. That problem of life has never been solved in those terms, but as our understanding of the immensely subtle chemistry of living things has improved, it has gradually come to seem less and less obvious that it is really a problem. In a similar way the authors hope that if the computational explanatory gap can be bridged, so that we gradually develop a full account of cognitive processes from the ground-level firing of neurons up to high-level conscious problem-solving, the Hard Problem will gradually cease to seem like something we need to worry about.

That is optimistic, but not unreasonably so, and I think the new perspective offered is a very interesting and plausible one. I’m convinced that the gap exists and that it needs to be bridged: but I’m less sure that it can easily be done.  It might be that Reggia, Monner, and Sylvester are affected in a different way by the same kind of outlook they criticise in philosophers: these are all computational problems, so they’re all tractable. I’m not sure how we can best address the gap, and I suspect it’s there not just because people have failed to recognise it, but because it is also genuinely difficult to deal with.

For one thing, the authors tend to assume the problem is computational, and it’s not clear that computation is of the essence here. The low-level processes at a neuronal level don’t look to be based on running any algorithm – that’s part of the nature of the gap. High-level processes may be amenable to algorithmic simulation, but that doesn’t mean that’s the way the brain actually does it. Take the example of catching a ball – how do we get to the right place to intercept a ball flying through the air? One way would be some complex calculations about perspective and vectors: the brain could abstract the data, do the sums, and send back the resulting instructions. We could simulate that process in a computer quite well. But we know – I think – that that isn’t the way it’s actually done: the brain uses a simpler and quicker process which never involves abstract calculation, but is based on straight matching of two inputs: we just run forward if the elevation of the ball is reducing and back if it’s increasing – a process which incidentally corresponds to a sub-optimal algorithm, but one that is good enough in practice. Fielders are incapable of predicting where a ball is going, but they can run towards the spot in such a way as to be there when the ball arrives. It might be that all the ‘higher-level’ processes are like this, and that an attempt to match up with ideally-modelled algorithms is therefore categorically off-track.
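The fielder’s rule described above can be sketched in a few lines of code – a toy illustration, with all the function names, step sizes and numbers being my own assumptions rather than anything from the paper:

```python
import math

def elevation_angle(ball_height, horizontal_distance):
    """Angle of elevation (radians) of the ball as the fielder sees it."""
    return math.atan2(ball_height, horizontal_distance)

def fielder_step(prev_angle, curr_angle, step=0.5):
    """The simple matching rule: run forward if the elevation is
    dropping, back if it is rising, stay put if it is steady.
    Returns a (hypothetical) displacement toward the ball."""
    if curr_angle < prev_angle:
        return +step   # ball appears to sink: close the distance
    elif curr_angle > prev_angle:
        return -step   # ball appears to climb: back away
    return 0.0         # angle steady: hold position
```

The point of the sketch is that no trajectory is ever computed: the rule compares two successive inputs and acts, yet it reliably puts the fielder where the ball lands.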

Even if those doubts are right, however, it doesn’t mean that the proposed re-framing of the investigation is wrong or unhelpful, and in fact I’m inclined to think it’s a very useful new perspective.


Picture: Chalmers. The Conscious Mind was something of a blockbuster, as serious philosophical works go, so a big new book from David Chalmers is undoubtedly an event.  Anyone who might have been hoping for a recantation of his earlier views, or a radical new direction, will be disappointed – Chalmers himself says he is a little less enthusiastic about epiphenomenalism and a little more about a central place for intentionality, and that’s about it. The Character of Consciousness is partly a consolidation, bringing together pieces published separately over the last few years; but the restatement does also show how his views have developed, broadening into new areas while clarifying and reinforcing others.

What are those views? Chalmers begins by setting out again the Hard Problem (a term with which his name will forever be associated) of explaining phenomenal experience – why is it that ‘there is something it is like’ to experience colours, sound, anything? The key point is that experience is simply not amenable to the kind of reductive explanation which science has applied elsewhere; we’re not dealing with functions or capacities, so reduction can gain no traction. Chalmers notes – justly, I’m afraid – that many accounts which offer to explain the problem actually go on to consider one or other of the simpler problems instead (more contentiously he quotes the theories of Crick and Koch, and Bernard Baars, as examples). In this initial exposition Chalmers avoids quoting the picturesque thought experiments which are usually used, but the result is clear and readable; if you never read The Conscious Mind I think you could perhaps start here instead.

He is not, of course, content to leave subjective experience an insoluble mystery and offers a programme of investigation which (to drastically over-simplify) relies on some basic correspondences between the kind of awareness which is amenable to scientific investigation and the experience which isn’t. Getting at consciousness this way naturally tends to tell us about the aspects which relate to awareness rather than the inner nature of consciousness itself: on that, Chalmers tentatively offers the idea that it might be a second aspect of information (in roughly the sense defined by Claude Shannon).  I’m a little wary of information in this sense having a big metaphysical role – for what it’s worth I believe Shannon himself didn’t like his work being built on in this direction.

The next few chapters, following up on the project of investigating ineffable consciousness through its effable counterparts, deal with the much-discussed search for the neural correlates of consciousness (NCC). It’s a careful and not over-optimistic account. While some simple correspondences between neural activity and specific one-off experiences have long been well evidenced, I’m pessimistic myself about the possibility of NCCs in any general, useful form. I doubt whether we would get all that much out of a search for the alphabetic correlates of narrative, though we know that the alphabet is in some sense all you need, and the case of neurons and consciousness is surely no easier. Chalmers rightly suggests we need principles of interpretation: but once we’ve stopped talking about a decoding and are talking about an interpretation instead, mightn’t the essential point have slipped through our fingers?

The next step takes us on to ontology. In Chalmers’ view, the epistemic gap, the fact that knowledge about the physics does not entail knowledge of the phenomenal, is a sign that there is a real, ontological gap, too. Materialism is not enough: phenomenal experience shows that there’s more. He now gives us a fuller account of the arguments in favour of qualia, the items of phenomenal experience, being a real problem for materialism, and categorises the positions typically taken (other views are of course possible).

  • Type A Materialism denies the epistemic gap: all this stuff about phenomenal experience is so much nonsense.
  • Type B Materialism accepts the epistemic gap, but thinks it can be dealt with within a materialist framework.
  • Type C Materialism sees the epistemic gap as a grave problem, but holds that in the limit, when we understand things better, we’ll understand how it can be reconciled with materialism.

In the other camp we have non-materialist views.

  • Type D Dualism puts phenomenal experience outside the physical world, but gives it the power to influence material things.
  • Type E Dualism,  epiphenomenalism, also puts phenomenal experience outside the physical world, but denies that it can affect material things: it is a kind of passenger.

Finally we have the option that Chalmers appears to prefer:

  • Type F monism (not labelled as a materialism, you notice, though arguably it is). This is the view that consciousness is constituted by the intrinsic properties of physical entities: Chalmers suggests it might be called Russellian monism.

The point, as I understand it, is that we normally only deal with the external, ‘visible’ aspects of physical things: perhaps phenomenal experience is what they are intrinsically like in themselves – inside, as it were. I like this idea, though I suspect I come at it from the opposite direction: to Chalmers, it seems to mean something like ‘those experiences you’re having – well, they’re the kind of thing that constitutes reality’, whereas to me it’s more ‘you know reality – well, that’s what you’re actually experiencing’. Chalmers’ way of looking at it has the advantage of leaving him positioned to investigate consciousness by proxy, whereas I must admit that my point of view tends to leave me with no way into the question of what intrinsic reality is, and makes mysterian scepticism (which I don’t like any more than Chalmers) look regrettably plausible.

Now Chalmers expounds the two-dimensional argument by which he sets considerable store. This is an argument intended to help us get  from an epistemic gap to an ontological one by invoking two-dimensional semantics and more sophisticated conceptions of possibility and conceivability.  It is as technical as that last sentence may have suggested. To illustrate its effects, Chalmers concentrates on the conceivability argument: this is basically the point often dramatised with zombies, namely that we can conceive of a world, or people, identical to the ones we’re used to in all physical respects but completely without phenomenal experience. This shows that there is something over and above the physical account, so materialism is false.  One rejoinder to this argument might be that the world is under no obligations to conform with our notions of what is conceivable; Chalmers, by distinguishing forms of conceivability and of possibility, and drawing out the relations between them, wants to say that in certain respects it is so obliged, so that either materialism is false or Russellian monism is true.  (Lack of space – and let’s be honest, brains – prevents me from giving a better account of the argument at the moment.)

Up to this point the book maintains a pretty good overall coherence, although Chalmers explicitly suggests that reading it straight through is only one approach and unlikely to be the best for most readers; from here on in it becomes more clearly an anthology of related pieces.

Chalmers gives us a new version of Mary the Colour Scientist (no constraint about the old favourites in this part of the book) in Inverted Mary. When original Mary sees a tomato for the first time she discovers that it causes the phenomenal experience of redness: when inverted Mary sees a tomato (we must assume that it is the same one, not a less ripe version) she discovers that it causes the phenomenal experience of greenness.  This and similar arguments have the alarming implication that the ineffability of qualia, of phenomenal experience, cannot be ring-fenced: it spills over at least into the intentionality of Mary’s knowledge and beliefs, and in fact evidently into a great deal of what we think, say and believe.  This looks worrying, but on reflection I’m not sure it’s such big news as it seems; it’s inherent in the whole problem of qualia that when we both look at a tomato I have no way of being sure that what you experience – and refer to – as red is the same as the thing I’m talking about. More comfortingly Chalmers goes on to defend a certain variety of infallibility for direct phenomenal beliefs.

Further chapters provide more evidence of Chalmers’ greater interest in intentionality: he reviews several forms of representationalism, the view that phenomenal experience has some intentional character (that is, it’s about or indicates something) and defends a narrow variety. He offers us a new version of the Garden of Eden, here pressed into service as a place where our experiences are direct and perfectly veridical. Chalmers uses the notion of Edenic content as a tool to break apart the constituents of experience; in fact, he seems eventually to convince himself that Edenic content is not only possible but fundamental, possibly the basis of perceptual experience. It’s an interesting idea.

Included here too is a nice piece on the metaphysics of the Matrix (the film, that is).  Chalmers entertainingly (and surely rightly) argues that the proposition that we are living in a matrix, a virtual reality world, is not sceptical, but metaphysical. It’s not, in fact, that we disbelieve in the world of the matrix, rather that we entertain some hypotheses about its ontological underpinnings. Even bits are things.

The book rounds things off with an attempt (co-authored with Tim Bayne) to sort out some of the issues surrounding the unity of consciousness, distinguishing access and phenomenal unity along the lines of Ned Block’s distinction between access and phenomenal consciousness, and upholding the necessity of phenomenal unity at least.

It’s a good, helpful book; what the content lacks in novelty it makes up in clarity. Chalmers has a persuasive style, and his expositions come across as moderate and sensible (perhaps the reduced epiphenomenalism helps a bit). It’s surprising that the denial of materialism (surely the dominant view of our time) can seem so common sense.

Picture: Clockwork Orange. Over at the Institute for Ethics and Emerging Technologies,  Martine Rothblatt offers to solve the Hard Problem for us in her piece “Can Consciousness be created in Software?”.  The Hard Problem, as regulars here will know, is how to explain subjective experience, the redness of red, the ineffable inner sense of what experience is like. Rothblatt’s account is quite short, and on my first reading I had slipped past the crucial sentences and discovered she was claiming victory before I quite realised what her answer was, so that I had to go back and read it again more carefully.

She says: “Redness is part of the gestalt impression obtained in a second or less from the immense pattern of neural connections we have built up about red things… With each neuron able to make as many as 10,000 connections, and with 100 billion neurons, there is ample possibility for each person to have subjective experiences through idiosyncratic patterns of connectivity.” The point seems to be that the huge connectivity of the human brain makes it easy for everyone’s experience of redness to be unique.
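The scale Rothblatt is appealing to is easy to check with back-of-envelope arithmetic (the figures below simply restate the ones in her quotation):

```python
# Back-of-envelope arithmetic for the figures Rothblatt quotes.
neurons = 100e9          # ~100 billion neurons
connections = 10_000     # up to ~10,000 connections per neuron
synapses = neurons * connections
print(f"{synapses:.0e}")  # on the order of 1e15 potential connections
```

A thousand trillion potential connections is certainly enough combinatorial room for every brain’s wiring to be unique – the question raised below is whether that uniqueness tells us anything about subjectivity.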

This is an interesting and rather novel point of view. One traditional way of framing the Hard Problem is to ask whether the blue that I see is really the same as the blue that you see – or could it be equivalent to my orange? How would we know?  On Rothblatt’s view it would seem the answer must be that my blue is definitely not the same as yours, nor the same as anyone else’s; and nor is it the same as your orange, or Fred’s green for that matter, everyone having an experience of blue which is unique to them and their particular pattern of neuronal connection. I find this a slightly scary perspective, but not illogical. Presumably from Rothblatt’s point of view the only thing the different experiences have in common is that they all refer to the same blue things in the real world (or something like that).

Is it necessarily so, though?  Does the fact that our neuronal connections are different mean our experiences are different? I don’t think so. After all, I can write a particular sentence in different fonts and different sizes and colours of text;  I can model it in clay or project it on a screen, yet it remains exactly the same sentence. Differences in the substrate don’t matter.  We can go a step further: when people think of that same sentence, we must presume that the neuronal connections supporting the thought are different in each individual – yet it’s the same sentence they’re thinking. So why can’t different sets of neural connections support the same experience of blue in different heads?
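The substrate-independence point can be illustrated trivially in code – a toy analogy of my own, not an argument in itself: the same sentence survives several physically different encodings and is recovered intact from each.

```python
# Toy illustration of substrate independence: one sentence, several
# quite different representations, identical content recovered from each.
sentence = "The cat sat on the mat."

as_utf8 = sentence.encode("utf-8")                 # a byte string
as_codepoints = [ord(c) for c in sentence]         # a list of integers
as_bits = "|".join(format(b, "08b") for b in as_utf8)  # a binary "substrate"

# Each representation differs physically, yet decodes to the same sentence.
assert as_utf8.decode("utf-8") == sentence
assert "".join(chr(n) for n in as_codepoints) == sentence
assert bytes(int(b, 2) for b in as_bits.split("|")).decode("utf-8") == sentence
```

If the content of a sentence can be identical across wildly different representations, it is at least not obvious that different neuronal wiring entails different experiences.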

Rothblatt’s account is a brief one, and perhaps she has some further theoretical points which explain the relationship between neuronal structures and subjective experiences in a way which would illuminate this. But hang on – wasn’t the relationship between neuronal structures and subjective experience the original problem? Slapping our foreheads, we realise that Rothblatt has palmed off on us an idea about uniqueness which actually doesn’t address the Hard Problem at all. The Hard Problem is not about how brains can accommodate a million unique experiences: it’s about how brains can accommodate subjectivity, how they make it true that there is something it is like to see red.

I fear this may be another example of the noticeable tendency of some theorists, especially those with a strong scientific background, to give us clockwork oranges, to borrow the metaphor Anthony Burgess used in a slightly different context: accounts that model some of the physical features quite nicely (see, it’s round, it hangs from trees effectively) while missing the essential juice which is really the whole point. According to Burgess, or at least to F. Alexander, the ill-fated character who expresses this view in the novel, the point of a man is not orderly compliance with social requirements but, as it were, his juice, his inner reality. I think the answer to the Hard Problem is indeed something to do with our reality (I should say that I mean ‘reality’ in a perfectly everyday sense: Rothblatt, like some purveyors of mechanical fruit, is a little too quick to dismiss anything not strictly reducible to physics as mysticism): we’re fine when we’re dealing with abstractions, it’s the concrete and particular which remains stubbornly inexplicable in any but superficial terms. It’s my real experience of redness that causes the difficulty. Perhaps Rothblatt’s ideas about uniqueness are an attempt to capture the one-off quality of that experience; but uniqueness isn’t quite the point, and I think we need something deeper.