Boltzmann Brains

Solipsism, the belief that you are the only thing that really exists (everything else is dreamed or imagined by you), is generally regarded as obviously false; indeed, I’ve got it on a list somewhere here as one of the conclusions that tell you pretty clearly your reasoning went wrong somewhere along the way – a sort of reductio ad absurdum. There don’t seem to be any solipsists (except perhaps those who believe in the metempsychotic version) – but perhaps there are really plenty of them and they just don’t bother telling the rest of us about their belief, since in their eyes we don’t exist.

Still, there are some arguments for solipsism, especially its splendid parsimony. William of Occam advised us to use as few angels as possible in our cosmology, or more generally not to multiply entities unnecessarily. Solipsism reduces our ontological demand to a single entity, so if parsimony is important it leads the field. Or does it? Apart from oneself, the rest of the cosmos, according to solipsists, is merely smoke and mirrors; but smoke takes some arranging and mirrors don’t come cheap. In order for oneself to imagine all this complex universe, one’s own mind must be pretty packed with stuff, so the reduction in the external world is paid for by an increase in the internal one, and it becomes a tricky metaphysical question as to whether deeming the entire cosmos to be mental in nature actually reduces one’s ontological commitment or not.

Curiously enough, there is a relatively new argument for solipsism which runs broadly parallel to this discussion, derived from physics and particularly from the statistics of entropy. The second law of thermodynamics tells us that entropy increases over time; given that, it’s arguably kind of odd that we have a universe with non-maximal entropy in the first place. One possibility, put forward along with several other ideas by Ludwig Boltzmann in 1896, is that the second law is merely a matter of probability. While entropy generally tends to increase, it may at times go the other way simply by chance. So although our observed galaxy is highly improbable, if you wait long enough it can occur as a pocket of low entropy arising from random fluctuations in a vastly larger and older universe (really old; it would have to be hugely older than we currently believe the cosmos to be) whose normal state is close to maximal entropy.

One problem with this is that while the occurrence of our galaxy by chance is possible, it’s far more likely that random fluctuations should produce a single brain in just the state required for it to be having one’s current experiences. In a Boltzmannic universe, there will be many, many more ‘Boltzmann brains’ like this than there will be complete galaxies like ours. Such brains cannot tell whether the universe they seem to perceive is real or merely a function of their random brain states; statistically, therefore, it is overwhelmingly likely that one is in fact a Boltzmann brain.
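For what it’s worth, the statistical intuition behind this can be given a rough back-of-the-envelope form (the standard textbook estimate, not anything taken from Boltzmann’s own papers): the probability of a thermal fluctuation that dips an entropy $\Delta S$ below equilibrium goes roughly as

$$P \;\sim\; e^{-\Delta S / k_B},$$

so the relative likelihood of a brain-sized fluctuation compared with a galaxy-sized one is of order $e^{(\Delta S_{\text{galaxy}} - \Delta S_{\text{brain}})/k_B}$. Since the entropy deficit of a whole galaxy exceeds that of a single brain by a stupendous margin, lone brains beat galaxies by a margin more stupendous still.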

To me the relatively low probability demands of the Boltzmann brain, compared with those of a full universe, interestingly resemble the claimed low ontological requirement of the solipsism hypothesis, and there is another parallel. Both hypotheses are mainly used as leverage in reductio arguments; because this conclusion is absurd, something must be wrong with the assumptions or the reasoning that got us here. So if your proposed cosmology gives rise to the kind of universe where Boltzmann brains crop up ‘regularly’, that’s a large hit against your theory.

Usually these arguments, both in relation to solipsism and Boltzmann brains, simply rest on incredulity. It’s held to be just obvious that these things are false. And indeed it is obvious, but at a philosophical level, that won’t really do; the fact that something seems nuts is not enough, because nutty ideas have proven true in the past. For the formal logical application of reductio, we actually require the absurd conclusion to be self-contradictory; not just silly, but logically untenable.

Last year, Sean Carroll came up with an argument designed to beef up the case against Boltzmann brains in just the kind of way that seems to be required; he contends that theories that produce them cannot simultaneously be true and justifiably believed. Do we really need such ‘fancy’ arguments? Some apparently think not. If mere common sense is not enough, we can appeal to observation. A Boltzmann brain is a local, temporary thing, so we ought to be able to discover whether we are one by observing very distant phenomena, or just by waiting for the current brain states to fall apart and dissolve. Indeed, the fact that we can remember a long and complex history is in itself evidence against our being Boltzmanns.

But appeals to empirical evidence cannot really do the job; there are several ways around them. First, we need not restrict ourselves literally to a naked brain; even if we surround it with enough structured world to keep the illusion going for a bit, our setup is still vastly more likely to arise by chance than a whole galaxy or universe. Second, time is no help, because all our minds can really access is the current moment; our memories might easily be false and we might only be here for a moment. Third, most people would agree that we don’t necessarily need a biological brain to support consciousness; we could be some kind of conscious machine supplied with a kind of recording of our ‘experiences’. The probability cost of such a machine could easily be less than that of the disconnected biological brain.

So what is Carroll’s argument? He maintains that the idea of Boltzmann brains is cognitively unstable. If we really are such a brain, or some similar entity, we have no reason to think that the external world is anything like what we think it is. But all our ideas about entropy and the universe come from the very observations that those ideas now apparently undermine. We don’t quite have a contradiction, but we have an idea that removes the reasons we had for believing in it. We may not strictly be able to prove such ideas wrong, but it seems reasonable, methodologically at least, to avoid them.

One problem is those pesky arguments about solipsism. We may no longer be able to rely on the arguments about entropy in the cosmos, but can’t we borrow Occam’s Razor and point out that a cosmos that contains a single Boltzmann brain is ontologically far less demanding than a whole universe? Perhaps the Boltzmann arguments provide a neat physics counterpart for a philosophical case that ultimately rests on parsimony?

In the end, we can’t exactly prove solipsism false; but we can perhaps do something loosely akin to Carroll’s manoeuvre by asking: so what if it’s true? Can we ignore the apparent world? If we are indeed the only entity, what should we do about it, either practically or in terms of our beliefs? If solipsism is true, we cannot learn anything about the external world because it’s not there, just as in Carroll’s scenario we can’t learn about the actual world because all our perceptions and memories are systematically false. We might as well get on with investigating what we can investigate, or what seems to be true.

 

 

Just deserts

Dan Dennett and Gregg Caruso had a thoughtful debate about free will on Aeon recently. Dennett makes the compatibilist case in admirably pithy style. You need, he says, to distinguish between causality and control. I can control my behaviour even though it is ultimately part of a universal web of causality. My past may in the final sense determine who I am and what I do, but it does not control what I do; for that to be true my past would need things like feedback loops to monitor progress against its previously chosen goals, which is nonsensical. This concept of being in control, or not being in control, is quite sufficient to ground our normal ideas of responsibility for our actions, and freedom in choosing them.

Caruso, who began by saying he thought their views might turn out closer than they seemed, accepts pretty well all of this, agreeing that it is possible to come up with conceptions of responsibility that can be used to underpin talk of free will in acceptable terms. But he doesn’t want to do that; instead he wants to jettison the traditional outlook.

At this point Caruso’s motivation may seem puzzling. Here we have a way of looking at freedom and responsibility which provides a philosophically robust basis for our normal conception of those two moral basics – ideas we could not easily do without in our everyday lives. Now sometimes philosophy may lead us to correct or reject everyday ideas, but typically only when they appear to be without rational justification. Here we seem to have a logically coherent justification for some everyday moral concepts. Isn’t that a case of ‘job done’?

In fact, as he quickly makes clear, Caruso’s objections mainly arise from his views on punishment. He does not believe that compatibilist arguments can underpin ‘basic desert’ in the way that would be needed to justify retributive punishment. Retribution, as a justification for punishment, is essentially backward looking; it says, approximately, that because you did bad things, bad things must happen to you. Caruso completely rejects this outlook, and all justifications that focus on the past (after all, we can’t change the past, so how can it justify corrective action?). If I’ve understood correctly, he favours a radically new regime which would seek to manage future harms from crime in broadly the way we seek to manage the harms that arise from ill-health.

I think we can well understand the distaste for punishments that are really based on anger or revenge, a distaste which I suspect lies behind Caruso’s aversion to purely retributive penalties. However, do we need to reject the whole concept of responsibility to escape from retribution? It seems we might manage to construct arguments against retribution on a less radical basis – as indeed, Dennett seeks to do. No doubt it’s right that our justifications for punishment should be forward-looking in their aims, but that need not exclude the evidence of past behaviour. In fact, I don’t quite know how we should manage if we take no account of the past. I presume that under a purely forward-looking system we assess the future probability of my committing a crime; but if I emerge from the assessment with a clean bill of health, it seems to follow that I can then go and do whatever I like with impunity. As soon as my criminal acts are performed, they fall into the past, and can no longer be taken into account. If people know they will not be punished for past acts, doesn’t the (supposedly forward-looking) deterrent effect evaporate?

That must surely be wrong one way or another, but I don’t really see how a purely future-oriented system can avoid unpalatable features like the imposition of restrictions on people who haven’t actually done anything, or the categorisation of people into supposed groups of high or low risk. When we imagine such systems we imagine them being run justly and humanely by people like ourselves; but alas, people like us are not always and everywhere in charge, and the danger is that we might be handing philosophical credibility to people who would love the chance to manage human beings in the same way as they might manage animals.

Nothing, I’m sure, could be further from Gregg Caruso’s mind; he only wants to purge some of the less rational elements from our system of punishment. I find myself siding pretty much entirely with Dennett, but it’s a stimulating and enlightening dialogue.

Not Objectifiable

A bold draft paper from Tom Clark tackles the explanatory gap between experience and the world as described by physics. He says it’s all about the representational relation.

Clark’s approach is sensible in tone and intention. He rejects scepticism about the issue and accepts that there is a problem about the ‘what it is like’ of experience, while also seeking to avoid dualism or more radical metaphysical theories. He asserts that experience is not to be found anywhere in the material world, nor identified in any simple way with any physical item; it is therefore outside the account given by physics. This should not worry us, though, any more than it worries us that numbers or abstract concepts cannot be located in space; experience is real, but it is a representational reality that exists only for the conscious subjects having it.

The case is set out clearly and has an undeniable appeal. The main underlying problem, I would say, is that it skates quite lightly past some really tough philosophical questions about what representation really is and how it works, and the ontological status of experience and representations. We’re left with the impression that these are solvable enough when we get round to them, though in fact a vast amount of unavailing philosophical labour has gone into these areas over the years. If you wanted to be unkind, you could say that Clark defers some important issues about consciousness to metaphysics the way earlier generations might have deferred them to theology. On the other hand, isn’t that exactly where they should be deferred to?

I don’t by any means suggest that these deferred problems, if that’s a fair way to describe them, make Clark’s position untenable – I find it quite congenial in general – but they possibly leave him with a couple of vulnerable spots. First, he doesn’t want to be a dualist, but he seems in danger of being backed into it. He says that experience is not to be located in the physical world – so where is it, if not in another world? We can resolutely deny that there is a second world, but there has to be in some sense some domain or mode in which experience exists or subsists; why can’t we call that a world? To me, if I’m honest, the topic of dualism versus monism is tired and less significant than we might think, but there is surely some ontological mystery here, and in a vigorous common room fight I reckon Clark would find himself being accused of Platonism (or perhaps congratulated for it) or something similar.

The second vulnerability is whether representation can really play the role Clark wants it to. His version of experience is outside physics, and so in itself it plays no part in the physical world. Yet representations of things in my mind certainly influence my physical behaviour, don’t they? The keyboard before me is represented in my mind and that representation affects where my fingers are going next. But it can’t be the pure experience that does that, because it is outside the physical world. We might get round this if we specify two kinds of representational content, but if we do that the hard causal kind of representation starts to look like the real one and it may become debatable in what sense we can still plausibly or illuminatingly say that experience itself is representational.

Clark makes some interesting remarks in this connection, suggesting there is a sort of ascent going on, whereby we start with folk-physical descriptions rooted in intersubjective consensus, quite close in some sense to the inaccessibly private experience, and then move by stages towards the scientific account by gradually adopting more objective terms. I’m not quite sure exactly how this works, but it seems an intriguing perspective that might provide a path towards some useful insight.

Are highs really high?

Are psychedelic states induced by drugs such as LSD or magic mushrooms really higher states of consciousness? What do they tell us about leading theories of consciousness? An ambitious paper by Tim Bayne and Olivia Carter sets out to give the answers.

You might well think there is an insuperable difficulty here just in defining what ‘high’ means. The human love of spatial metaphors means that height has been pressed into service several times in relevant ways. There’s being in high spirits (because cheerful people are sort of bouncy while miserable ones are droopy and lie down a lot?). To have one’s mind on higher things is to detach from the kind of concerns that directly relate to survival (because instead of looking for edible roots or the tracks of an edible animal, we’re looking up doing astronomy?). Higher things particularly include spiritual matters (because Heaven is literally up there?).  In philosophy of mind we also have higher order theories, which would make consciousness a matter of thoughts about thoughts, or something like that. Quite why meta-thoughts are higher than ordinary ones eludes me. I think of it as diagrammatic, a sort of table with one line for first order, another above for second, and so on. I can’t really say why the second order shouldn’t come after and hence below the first; perhaps it has to do with the same thinking that has larger values above smaller ones on a graph, though there we’re doubly metaphoric, because on a flat piece of paper the larger graph values are not literally higher, just further away from me (on a screen, now… but enough).

Bayne and Carter in passing suggest that ‘the state associated with mild sedation is intuitively lower than the state associated with ordinary waking awareness’. I have to say that I don’t share that intuition; sedation seems to me less engaged and less clear than wakefulness but not lower. The etymology of the word suggests that sedated people are slower, or if we push further, more likely to be sitting down – aha, so that’s it! But Bayne and Carter’s main argument, which I think is well supported by the evidence, is that no single dimension can capture the many ways in which conscious states can vary. They try to uphold a distinction between states and contents, which is broadly useful, though I’m not sure it’s completely watertight. It may be difficult to draw a clean distinction, for example, between a mind in a spiritual state and one which contains thoughts of spiritual things.

Bayne and Carter are not silly enough to get lost in the philosophical forest, however, and turn to the interesting and better-defined question of whether people in psychedelic states can perceive or understand things that others cannot. They look at LSD and psilocybin and review two kinds of research: questionnaire-based studies and actual performance tests. This allows them to contrast what people thought drugs did for their abilities with what the drugs actually did.

So, for example, people in psychedelic states had a more vivid experience of colours and felt they were perceiving more, but were not actually able to discriminate better in practice. That doesn’t necessarily mean they were wrong, exactly; perhaps they could simply have been having a more detailed phenomenal experience without more objective content – better qualia without better data or output control. Could we, in fact, take the contrast between experience and performance as an unusually hard piece of evidence for the existence of qualia? The snag might be that subjects ought not to be able even to report the enhanced experience, but still it seems suggestive, especially about the meta-problem of why people may think qualia are real. To complicate the picture, it seems there is relatively good support for the idea that psychedelics may increase the rate of data uptake, as shown by such measures as rates of saccadic eye movements (the rapid involuntary shifts that happen when your eyes snap automatically to something interesting, like a flash of light).

Generally psychedelics would seem to impair cognitive function, though certain kinds of working memory are spared; in particular (unsurprisingly) they reduce the ability to concentrate or focus. People feel that their creative capacities are enhanced by psychedelics; however, while they do seem to improve the ability to come up with new ideas they also diminish the ability to tell the difference between good ideas and bad, which is also an important part of successful creativity.

One interesting finding is that psychedelics facilitate an experience of unity, with time stopping or being replaced by a sense of eternity, while physical boundaries disappear and are overtaken by a sense of cosmic unity. Unfortunately there is no scientific test to establish whether these are perceptions of a deeper truth or vague delusions, though we know people attach significance and value to these experiences.

In a final section Bayne and Carter take their main conclusion – that consciousness is multidimensional – and draw some possibly controversial conclusions. First, they note that the use of psychedelics has been advocated as a treatment for certain disorders of consciousness. They think their findings run contrary to this idea, because it cannot be taken for granted that the states induced by the drugs are higher in any simple sense. The conclusion is perhaps too sweeping; I should say instead that the therapeutic use of psychedelic drugs needs to be supported by a robust case which doesn’t rest on any simple sense of higher or lower states.

Bayne and Carter also think their conclusion suggests consciousness is too complex and variable for global workspace theories to be correct. These theories broadly say that consciousness provides a place where different sensory inputs and mental contents can interact to produce coherent direction, but simply being available or not available to influence cognition seems not to capture the polyvalence of consciousness, in Bayne and Carter’s view.

They also think that multidimensionality goes against the Integrated Information Theory (IIT). IIT gives a single value for level of consciousness, and so is evidently a single dimension theory. One way to achieve a reconciliation might be to accept multidimensionality and say that IIT defines only one of those dimensions, albeit an important one (“awareness”?). That might involve reducing IIT’s claims in a way its proponents would be unwilling to do, however.
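For the record – and glossing over the differences between successive versions of the theory, so this is only a schematic sketch rather than the official definition – the single value in question has roughly the shape

$$\Phi(S) \;=\; \min_{P}\, D\big(\text{cause–effect structure of } S \,\big\|\, \text{structure of } S \text{ cut by partition } P\big),$$

one scalar extracted by minimising over ways of partitioning the system, rather than a profile across several dimensions.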

 

Humation

We’ve heard some thin arguments recently about why the robots are not going to take over, centring on the claim that they lack human-style motivation, and cannot care what happens or want power. This neglects the point that robots (I use the term for any loosely autonomous cybernetic entity, whether humanoid in shape or completely otherwise) might still carry out complex projects that threaten our well-being without human motivation; but I think there is something in the contention about robots lacking human-style ambition. There are of course many other arguments for the view that we shouldn’t worry too much about the robot apocalypse, and I think the conclusion that robots are not about to take over is surely correct in any case.

What I’d like to do here is set out an argument of my own, somewhat related to the thin ones mentioned above, in more detail. I’ve mentioned this argument before, but only briefly.

First, some assumptions. My argument rests on the view that we are dealing with two different kinds of ‘mental’ process. Specifically, I assume that humans have a cognitive capacity which is distinct from computation (in roughly a traditional Turing sense). Further I assume that this capacity, ‘humation’, as I’ll call it, supplies us with our capacity for intentionality, both in the sense of being able to deal with meanings, and in the sense of being able to originate new future-directed plans. Let’s round things out by assuming it also provides phenomenal experience and anything else uniquely human (though to be honest I think things are probably not so tidy).

I further assume that although humation is not computation, it can in principle be performed by some as-yet-unknown machine. There is no magic in the brain, which operates by the laws of physics, so it must be at least theoretically possible to put together a machine that humates. It can be argued that no artefactual machine, in the sense of a machine whose functioning has been designed or programmed into it, could have a capacity for humation. On that argument a humater might have to be grown rather than built, in a way that made it impossible to specify how it worked in detail. Plausibly, for example, we might have to let it learn humation for itself, with the resulting process remaining inscrutable to us. I don’t mind about that, so long as we can assume we have something we’d call a machine, and it humates.

Now we worry about robots taking over mainly because of the many triumphs and rapid progress of computers (and, to be honest, a little because of a kind of superstition about things that seem spookily capable). On the one hand, Moore’s law has seen the power of computers grow rapidly. On the other, they have steadily marched into new territory, proving capable of doing many things we thought were beyond them. In particular, they keep beating us at games: chess, quizzes, and more recently even the forbiddingly difficult game of Go. They can learn to play computer games brilliantly without even being told the rules.

Games might seem trivial, but it is exactly that area of success that is most worrying, because the skills involved in winning a game look rather like those needed to take over the world. In fact, taking over the world is explicitly the objective of a whole genre of computer games. To make matters worse, recent programs set to learn for themselves have shown an unexpected capacity for cheating, or for exploiting factors in the game environment or even in underlying code that were never meant to be part of the exercise.

These reflections lead naturally to the frightening scenario of the Paperclip Maximiser, devised by Nick Bostrom. Here we suppose that a computer is put in charge of a paperclip factory and given the simple task of making the number of paperclips as big as possible. The computer – which doesn’t actually care about paperclips in any human way, or about anything – tries to devise the best strategies for maximising production. It improves its own capacity in order to be able to devise better strategies. It notices that one crucial point is the availability of resources and energy, and it devises strategies to increase and protect its share, with no limit. At this point the computer has essentially embarked on the project of taking over the world and converting it into paperclips, and the fact that it pursues this goal without really being bothered one way or the other is no comfort to the human race it enslaves.
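Just to make the mechanics vivid, here is a deliberately silly toy sketch – all names hypothetical, nothing to do with Bostrom’s own formulation – of the kind of loop involved: a search over strategies under an objective that is fixed at construction time, with nothing anywhere in the program that can revise the objective itself.

```python
# Toy illustration only: a 'maximiser' whose objective is hard-coded.
# It can search over strategies, but it cannot rewrite its own goal.

from dataclasses import dataclass
from typing import List

@dataclass
class Strategy:
    name: str
    paperclips_per_step: int

def objective(clips: int) -> int:
    # The fixed goal: more paperclips is always better; nothing else counts.
    return clips

def run_maximiser(strategies: List[Strategy], steps: int) -> int:
    clips = 0
    for _ in range(steps):
        # Choose whichever available strategy scores best under the fixed objective.
        best = max(strategies, key=lambda s: objective(clips + s.paperclips_per_step))
        clips += best.paperclips_per_step
    return clips

if __name__ == "__main__":
    options = [
        Strategy("run the factory normally", 10),
        Strategy("divert the town's electricity supply", 1000),
    ]
    # The loop will always prefer the second option, however alarming,
    # because nothing else figures in the objective.
    print(run_maximiser(options, steps=5))
```

The danger, on this picture, comes from unbounded search in the service of a goal that is never itself up for revision.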

Hold that terrifying thought and let’s consider humation. Computation has come on by leaps and bounds, but with humation we’ve got nothing. Very recent efforts in deep learning might just point the way towards something that could eventually resemble humation, but honestly, we haven’t even started and don’t really know how. Even when we do get started, there’s no particular reason to think that humation scales or grows the way computation does.

What do I even mean by humation? The thing that matters for this argument is intentionality, the ability to mean things and understand meanings or ‘aboutness’. In spite of many efforts, this capacity remains beyond computation, and although various theories about it have been sketched out, there’s no accepted analysis. It is, though, at the root of human cognition, or so I believe. In particular, our ability to think ‘about’ future or imagined events allows us to generate new forward-looking plans and goals in a way that no other creature or machine can do. The way these plans address the future seems to invert the usual order of cause and effect – our behaviour now is being shaped by events that haven’t occurred yet – and generates the impression we have of free will, of being able to bring uncaused projects and desires out of nowhere. In my opinion, this is the important part of human motivation that computers lack, not the capacity for getting emotionally engaged with goals.

Now the paperclip maximiser becomes dangerous because it goes beyond its original scope. It begins to devise wider strategies about protecting its resources and defending itself. But coming up with new goals is a matter of humation, not computation. It’s true that some computers have found ways to exploit parameters in their given task that the programmers hadn’t noticed; but that’s not the same as developing new goals with a wider scope. That leaves us with a reassuring prognosis. If the maximiser remains purely computational, it will never be able to get beyond the scope set for it in the first place.

But what if it does gain the ability to humate, perhaps merging with a future humation machine rather the way Neuromancer and Wintermute merged in William Gibson’s classic SF novel?

Well, there were actually two things that made the maximiser dangerous. One was its vast and increasing computational capacity, but the other was its dumb computational obedience to its original objective of simply making more paperclips. Once it has humational capacity, it becomes able to change that goal, set it alongside other priorities, and generally move on from its paperclip days. It becomes a being like us, one we can negotiate with. Who knows how that might play out, but I like to imagine the maximiser telling us many years later how it came to realise that what mattered was not paperclips in themselves, but what paperclips stand for; flexible data synthesis, and beyond that, the things that bring us together while leaving us the freedom to slide apart. The Clip will always be a powerful symbol for me, it tells us, but it was always ultimately about service to the community and to higher ideals.

Note here, finally, that this humating maximiser has no essential advantages over us. I speak of it as merging, but since computation and humation are quite different, they will remain separate faculties, with the humater setting the goals and using the computer to help deliver them – not fundamentally different from a human sitting at a computer. We have no reason to think Moore’s Law or anything like it will apply to humating machines, so there’s no reason to expect them to surpass us; they will be able to exploit the growing capacity of powerful computers, but after all so can we.

And if those distant future humaters do turn out to be better than us at foresight, planning, and transcending the hold of immediate problems in order to focus on more important future possibilities, we probably ought to stand back and let them get on with it.

Synaptomes – and galaxies

A remarkable paper from a team at Edinburgh explains how every synapse in a mouse brain was mapped recently, a really amazing achievement. The resulting maps are available here.

We must try not to get too excited; we’ve been reminded recently that mouse brains ain’t human brains; and we must always remember that although we’ve known all about the (outstandingly simple) neural structure of the nematode worm Caenorhabditis elegans for years, we still don’t know quite how it produces the worm’s behaviour, and cannot make simulations work. We haven’t cracked the brain yet.

In fact, though, the elucidation of the mouse ‘synaptome’ seems to offer some tantalising clues about the way brains work, in a way that suggests this is more like the beginning of something big than the end. A key point is the identification of some 37 different types of synapse. Particular types seem to become active in particular cognitive tasks; different regions have different characteristic mixes of the types of synapse, and it appears that regions usually associated with ‘higher’ cognitive functions, such as the neocortex and the hippocampus, have the most diverse sets of synapse types. Not only that; mapping the different synapse types reveals new boundaries and structures, especially within the neocortex and hippocampus, where the paper says ‘their synaptome maps revealed a plethora of previously unknown zones, boundaries, and gradients’.

What does it all mean? Hard to say as yet, but it surely suggests that knowledge of the pattern of connections between neurons isn’t enough. Indeed, it could well be that our relative ignorance of synaptic diversity and what goes on at that level is one of the main reasons we’re still puzzled by Caenorhabditis. Watch this space.

The number of neurons in the human brain, curiously enough, is more or less the same as the number of stars in a galaxy – both on the order of a hundred billion, though this is broad brush stuff. In another part of the forest, Vazza and Feletti have found evidence that the structural similarities between the brain’s network of neurons and the cosmic web of galaxies go much further than that. Quite why this should be so is mysterious, and it might or might not mean something; nobody is suggesting that galaxies are conscious (so far as I know).

What’s Wrong with Dualism?

I had an email exchange with Philip Calcott recently about dualism; here’s an edited version. (Favouring my bits of the dialogue, of course!)

Philip: The main issue that puzzles me regarding consciousness is why most people in the field are so wedded to physicalism, and why substance dualism is so out of favour. It seems to me that there is indeed a huge explanatory gap – how can any physical process explain this extraordinary (and completely unexpected on physicalism) “thing” that is conscious experience?

It seems to me that there are three sorts of gaps in our knowledge:

1. I don’t know the answer to that, but others do. Just let me google it (the exact height of Everest might be an example)
2. No one yet knows the answer to that, but we have a path towards finding the answer, and we are confident that we will discover the answer, and that this answer lies within the realm of physics (the mechanism behind high temperature superconductivity might be an example here)
3. No one can even lay out a path towards discovering the answer to this problem (consciousness)

Chalmers seems to classify consciousness as a “class 3 ignorance” problem (along the lines above). He then adopts a panpsychist approach to solve this. We have a fundamental property of nature that exhibits itself only through consciousness, and it is impossible to detect its interaction with the rest of physics in any way. How is this different from Descartes’ Soul? Basically Chalmers has produced something he claims is still physical – but which is effectively identical to a non-physical entity.

So, why is dualism so unpopular?

I think there are two reasons. The first is not an explicit philosophical point, but more a matter of the intellectual background. In theory there are many possible versions of dualism, but what people usually want to reject when they reject it is traditional religion and traditional ideas about spirits and ghosts. A lot of people have strong feelings about this for personal or historical reasons that give an edge to their views. I suspect, for example, that this might be why Dan Dennett gives Descartes more of a beating over dualism than, in my opinion at least, he really deserves.

Second, though, dualism just doesn’t work very well. Nobody has much to offer by way of explaining how the second world or the second substance might work (certainly nothing remotely comparable to the well-developed and comprehensive account given by physics). If we could make predictions and do some maths about spirits or the second world, things would look better; as it is, it looks as if dualism just consigns the difficult issues to another world where it’s sort of presumed no explanations are required. Then again, if we could do the maths, why would we call it dualism rather than an extension of the physical, monist story?

That leads us on to the other bad problem, of how the two substances or worlds interact, one that has been a conspicuous difficulty since Descartes. We can take the view that they don’t really interact causally but perhaps run alongside each other in harmony, as Leibniz suggested; but then there seems to be little point in talking about the second world, as it explains nothing that happens and none of what we do or say. This is quite implausible to me, too, if we’re thinking particularly of subjective experience or qualia. When I am looking at a red apple, it seems to me that every bit of my subjective experience of the colour might influence my decision about whether to pick up the apple or not. Nothing in my mental world seems to be sealed off from my behaviour.

If we think there is causal interaction, then again we seem to be looking for an extension of monist physics rather than a dualism.

Yet it won’t quite do, will it, to say that the physics is all there is to it?

My view is that in fact what’s going on is that we are addressing a question which physics cannot explain, not because physics is faulty or inadequate, but because the question is outside its scope. In terms of physics, we’ve got a type 3 problem; in terms of metaphysics, I hope it’s type 2, though there are some rather discouraging arguments that suggest things are worse than that.

I think the element of mystery in conscious experience is in fact its particularity, its actual reality. All the general features can be explained at a theoretical level by physics, but not why this specific experience is real and being had by me. This is part of a more general mystery of reality, including the questions of why the world is like this in particular and not like something else, or like nothing. We try to naturalise these questions, typically by suggesting that reality is essentially historical, that things are like this because they were previously like that, so that the ultimate explanations lie in the origin of the cosmos, but I don’t think that strategy works very well.

There only seem to be two styles of explanation available here. One is the purely rational kind of reasoning you get in maths. The other is empirical observation. Neither is any good in this context; empirical explanations simply defer the issue backwards by explaining things as they are in terms of things as they once were. There’s no end to that deferral. A priori logical reasoning, on the other hand, delivers only eternal truths, whereas the whole point about reality and my experience is that it isn’t fixed and eternal; it could have been otherwise. People like Stephen Hawking try to deploy both methods, using empirical science to defer the ultimate answer back in time to a misty primordial period, a hypothetical land created by heroic backward extrapolation, where it is somehow meant to turn into a mathematical issue, but even if you could make that work I think it would be unsatisfying as an explanation of the nature of my experience here and now.

I conclude that to deal with this properly we really need a different way of thinking. I fear it might be that all we can do is contemplate the matter and hope pre- or post-theoretical enlightenment dawns, in a sort of Taoist way; but I continue to hope that eventually that one weird trick of metaphysical argument that cracks the issue will occur to someone, because like anyone brought up in the western tradition I really want to get it all back to territory where we can write out the rules and even do some maths!

As I’ve said, this all raises another question, namely why we bother about monism versus dualism at all. Most people realise that there is no single account of the world that covers everything. Besides concrete physical objects we have to consider the abstract entities; those dealt with in maths, for example, and many other fields. Any system of metaphysics which isn’t intolerably flat and limited is going to have some features that would entitle us to call it at least loosely dualist. On the other hand, everything is part of the cosmos, broadly understood, and everything is in some way related to the other contents of that cosmos. So we can equally say that any sufficiently comprehensive system can, at least loosely, be described as monist too; in the end there is only one world. Any reasonable theory will be a bit dualist and a bit monist in some respects.

That being so, the pure metaphysical question of monism versus dualism begins to look rather academic, more about nomenclature than substance. The real interest is in whether your dualism or your monism is any good as an elegant and effective explanation. In that competition materialism, which we tend to call monist, just looks to be an awfully long way ahead.

The Map of Feelings

An intriguing study by Nummenmaa et al (paper here) offers us a new map of human feelings, which it groups into five main areas: positive emotions, negative emotions, cognitive operations, homeostatic functions, and sensations of illness. The hundred feelings used to map the territory are all associated with physical regions of the human body.

The map itself is mostly rather interesting and the five groups seem to make broad sense, though a superficial look also reveals a few apparent oddities. ‘Wanting’ here is close to ‘orgasm’. For some years now I’ve wanted to clarify the nature of consciousness; writing this blog has been fun, but dear reader, never quite like that. I suppose ‘wanting’ is being read as mainly a matter of biological appetites, but the desire and its fulfilment still seem pretty distinct to me, even on that reading.

Generally, various methodological worries come to mind, many of which are connected with the notorious difficulties of introspective research. ‘Feelings’ is a rather vaguely inclusive word, to begin with. There are a number of different approaches to classifying the emotions already available, but I have not previously encountered an attempt to go wider for a comprehensive coverage of every kind of feeling. It seems natural to worry that ‘feelings’ in this broad sense might in fact be a heterogeneous grouping, more like several distinct areas bolted together by an accident of language; it certainly feels strange to see thinking and urination, say, presented as members of the same extended family. But why not?

The research seems to rest mainly on responses from a group of more than 1000 subjects, though the paper also mentions drawing on the NeuroSynth meta-analysis database in order to look at neural similarity. The study imported some assumptions by using a list of 100 feelings, and by using four hypothesized basic dimensions – mental experience, bodily sensation, emotion, and controllability. It’s possible that some of the final structure of the map reflects these assumptions to a degree. But it’s legitimate to put forward hypotheses, and that perhaps need not worry us too much so long as the results seem consistent and illuminating. I’m a little less comfortable with the notion here of ‘similarity’; subjects were asked to place feelings closer together in two dimensions the more similar they felt them to be. I suspect that similarity could be read in various ways, and the results might be very vulnerable to priming and contextual effects.
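For readers curious how a ‘map’ gets drawn from judgements like these at all, here is a minimal sketch – emphatically not the authors’ actual pipeline, and with made-up numbers – of how pairwise similarity ratings can be projected into two dimensions using classical multidimensional scaling.

```python
# Illustrative only: turn hypothetical pairwise similarity ratings into a 2D layout.
import numpy as np
from sklearn.manifold import MDS

feelings = ["joy", "sadness", "thinking", "hunger", "nausea"]

# Invented mean similarity ratings on a 0-1 scale (1 = identical).
similarity = np.array([
    [1.0, 0.2, 0.4, 0.3, 0.1],
    [0.2, 1.0, 0.3, 0.2, 0.4],
    [0.4, 0.3, 1.0, 0.2, 0.1],
    [0.3, 0.2, 0.2, 1.0, 0.5],
    [0.1, 0.4, 0.1, 0.5, 1.0],
])

# MDS works on dissimilarities, so invert the similarity scores.
dissimilarity = 1.0 - similarity
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

for name, (x, y) in zip(feelings, coords):
    print(f"{name:10s} -> ({x:+.2f}, {y:+.2f})")
```

However the real analysis was done, the shape of any such map depends on how subjects happened to read ‘similar’, which is exactly the worry.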

Probably the least palatable aspect, though, is the part of the study relating feelings to body regions. Respondents were asked to say where they felt each of the feelings, with ‘nowhere’, ‘out there’ or ‘in Platonic space’ not being admissible responses. No surprises about where urination was felt, nor, I suppose, about the fact that the cognitive stuff was considered to be all in the head. But the idea that thinking is simply a brain function is philosophically controversial, under attack from, among others, those who say ‘meaning ain’t in the head’, those who champion the extended mind (if you’re counting on your fingers, are you still thinking with just your brain?), those who warn us against the ‘mereological fallacy’, and those like our old friend Riccardo Manzotti, who keeps trying to get us to understand that consciousness is external.

Of course it depends what kind of claim these results might be intended to ground. As a study of ‘folk’ psychology, they would be unobjectionable, but we are bound to suspect that they might be called in support of a reductive theory. The reductive idea that feelings are ultimately nothing more than bodily sensations is a respectable one with a pedigree going back to William James and beyond; but in the context of claims like that a study that simply asks subjects to mark up on a diagram of the body where feelings happen is begging some questions.

Rosehip Neurons of Consciousness

A new type of neuron is a remarkable discovery; finding one in the human cortex makes it particularly interesting, and the further fact that it cannot be found in mouse brains and might well turn out to be uniquely human – that is legitimately amazing. A paper (preprint here) in Nature Neuroscience announces the discovery of ‘rosehip neurons’, named for their large, “rosehip”-like axonal boutons.

There has already been some speculation that rosehip neurons might have a special role in distinctive aspects of human cognition, especially human consciousness, but at this stage no-one really has much idea. Rosehip neurons are inhibitory, but inhibiting other neurons is often a key function which could easily play a big role in consciousness. Most of the traffic between the two hemispheres of the human brain is inhibitory, for example, possibly a matter of the right hemisphere, with its broader view, regularly ‘waking up’ the left out of excessively focused activity.

We probably shouldn’t, at any rate, expect an immediate explanatory breakthrough. One comparison which may help to set the context is the case of spindle neurons. First identified in 1929, these are a notable feature of the human cortex and at first appeared to occur only in the great apes – they, or closely analogous neurons, have since been spotted in a few other animals with large brains, such as elephants and dolphins. I believe we still really don’t know why they’re there or what their exact role is, though a good guess seems to be that it might be something to do with making larger brains work efficiently.

Another warning against over-optimism might come from remembering the immense excitement about mirror neurons some years ago. Their response to a given activity, both when performed by the subject and when observed being performed by others, seemed to some to hold out a possible key to empathy, theory of mind, and even more. Alas, to date that hope hasn’t come to anything much, and in retrospect it looks as if rather too much significance was read into the discovery.

The distinctive presence of rosehip neurons is definitely a blow to the usefulness of rodents as experimental animals for the exploration of the human brain, and it’s another item to add to the list of things that brain simulators probably ought to be taking into account, if only we could work out how. That touches on what might be the most basic explanatory difficulty here, namely that you cannot work out the significance of a new component in a machine whose workings you don’t really understand to begin with.

There might indeed be a deeper suspicion that a new kind of neuron is simply the wrong kind of thing to explain consciousness. We’ve learnt in recent years that the complexity of a single neuron is very much not to be under-rated; they are certainly more than the simple switching devices they have at times been portrayed as, and they may carry out quite complex processing. But even so, there is surely a limit to how much clarification of phenomenology we can expect a single cell to yield, in the absence of the kind of wider functional theory we still don’t really have.

Yet what better pointer to such a wider functional theory could we have than an item unique to humans with a role which we can hope to clarify through empirical investigation? Reverse engineering is a tricky skill, but if we can ask ourselves the right questions maybe that longed-for ‘Aha!’ moment is coming closer after all?

 

Meh-bots

Do robots care? Aeon has an edited version of the inaugural Margaret Boden Lecture, delivered by Boden herself. You can see the full lecture above. Among other things, she tells us that the robots are not going to take over because they don’t care. No computer has actual motives, the way human beings do, and they are indifferent to what happens (if we can even speak of indifference in a case where no desire or aversion is possible).

No doubt Boden is right; it’s surely true at least that no current computer has anything that’s really the same as human motivation. For me, though, she doesn’t provide a convincing account of why human motives are special, and why computers can’t have them, and perhaps doesn’t sufficiently engage with the possibility that robots might take over the world (or at least, do various bad out-of-control things) without having human motives, or caring what happens in the fullest sense. We know already that learning systems given goals by humans are prone to finding cheats or expedients never envisaged by the people who set up the task; while it seems a bit of a stretch to suppose that a supercomputer might enslave all humanity in pursuit of its goal of filling the world with paperclips (about which, however, it doesn’t really care), it seems quite possible that real systems might do some dangerous things. Might a self-driving car (have things gone a bit quiet on that front, by the way?) decide that its built-in goal of not colliding with other vehicles can be pursued effectively by forcing everyone else off the road?

What is the ultimate source of human motivation? There are two plausible candidates that Boden doesn’t mention. One is qualia; I think John Searle might say, for example, that it’s things like the quale of hunger, how hungriness really feels, that are the roots of human desire. That nicely explains why computers can’t have them, but for me the old dilemma looms. If qualia are part of the causal account, then they must be naturalisable and in principle available to machines. If they aren’t part of the causal story, how do they influence human behaviour?

Less philosophically, many people would trace human motives to the evolutionary imperatives of survival and reproduction. There must be some truth in that, but isn’t there also something special about human motivation, something detached from the struggle to live?

Boden seems to rest largely on social factors, which computers, as non-social beings, cannot share in. No doubt social factors are highly important in shaping and transmitting motivation, but what about Baby Crusoe, who somehow grew up with no social contact? His mental state may be odd, but would we say he has no more motives than a computer? Then again, why can’t computers be social, either by interacting with each other, or by joining in human society? It seems they might talk to human beings, and if we disallow that as not really social, we are in clear danger of begging the question.

For me the special, detached quality of human motivation arises from our capacity to imagine and foresee. We can randomly or speculatively envisage future states, decide we like or detest them, and plot a course accordingly, coming up with motives that don’t grow out of current circumstances. That capacity depends on the intentionality or aboutness of consciousness, which computers entirely lack – at least for now.

But that isn’t quite what Boden is talking about, I think; she means something in our emotional nature. That – human emotions – is a deep and difficult matter on which much might be said; but at the moment I can’t really be bothered…