Earlier this year Tononi’s Integrated Information Theory (IIT) gained a prestigious supporter in Max Tegmark, professor of Physics at MIT. The boost for the theory came not just from Tegmark’s prestige, however; there was also a suggestion that the IIT dovetailed neatly with some deep problems of physics, providing a possible solution and the kind of bridge between neuroscience, physics and consciousness that we could hardly have dared to hope for.

Tegmark’s paper presents the idea rather strangely, suggesting that consciousness might be another state of matter, like the states of being a gas, a liquid, or a solid. That surely can’t be true in any simple literal sense, because those particular states are normally considered to be mutually exclusive: becoming a gas means ceasing to be a liquid. If consciousness were another member of that exclusive set, it would mean that becoming conscious involved ceasing to be solid (or liquid, or gas), which is strange indeed. Moreover, Tegmark goes on to name the new state ‘perceptronium’ as if it were a new element. He clearly means something slightly different, although the misleading claim perhaps garners him sensational headlines which wouldn’t be available if he were merely saying that consciousness arose from certain kinds of subtle informational organisation, which is closer to what he really means.

A better analogy might be the many different forms carbon can take according to the arrangement of its atoms: graphite, diamond, charcoal, graphene, and so on; it can have quite different physical properties without ceasing to be carbon. Tegmark is drawing on the idea of computronium proposed by Toffoli and Margolus. Computronium is a hypothetical substance whose atoms are arranged in such a way that it consists of many tiny modules capable of performing computations.  There is, I think, a bit of a hierarchy going on here: we start by thinking about the ability of substances to contain information; the ability of a particular atomic matrix to encode binary information is a relatively rigorous and unproblematic idea in information theory. Computronium is a big step up from that: we’re no longer thinking about a material’s ability to encode binary digits, but the far more complex functional property of adequately instantiating a universal Turing machine. There are an awful lot of problems packed into that ‘adequately’.
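For the concretely minded, here is a minimal sketch of my own (nothing to do with Toffoli and Margolus’s actual constructions): Rule 110, a one-dimensional cellular automaton which has been proved Turing-complete. A uniform row of dumb cells obeying one fixed local rule is, in principle, a universal computer – the computronium intuition in its barest form.

```python
# A toy illustration (mine, not Toffoli and Margolus's): Rule 110, a
# one-dimensional cellular automaton proved Turing-complete. A uniform
# substrate of simple cells, each obeying one fixed local rule, can in
# principle support universal computation.

RULE = 110  # the update rule, read as an 8-bit lookup table

def step(cells):
    """Apply Rule 110 to a row of 0/1 cells, with wraparound at the edges."""
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # a number 0..7
        new.append((RULE >> neighbourhood) & 1)
    return new

# Start from a single live cell and watch structure emerge.
row = [0] * 60
row[-2] = 1
for _ in range(25):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```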

The leap from information to computation is as nothing, however, compared to the leap apparently required to go from computronium to perceptronium. Perceptronium embodies the property of consciousness, which may not be computational at all and of which there is no agreed definition. To say that raises a few problems is rather an understatement.

Aha! But this is surely where the IIT comes in. If Tononi is right, then there is in fact a hard-edged definition of consciousness available: it’s simply integrated information, and we can even quantify it; the measure is Phi. We can detect it and measure it, and if we do, perceptronium becomes mathematically tractable and clearly defined. I suppose if we were curmudgeons we might say that this is actually a hit against the IIT: if it makes something as absurd as perceptronium a possibility, there must be something pretty wrong with it. We’re surely not that curmudgeonly, but there is something oddly non-dynamic here. We think of consciousness, surely, as a process, a function: but it seems we might integrate quite a lot of information and simply have it sit there as perceptronium in crystalline stillness; the theory says it would be conscious, but it wouldn’t do anything. We could get round that by embracing the possibility of static conscious states, like one frame out of the movie film of experience; but Tegmark, if I understand him right, adds another requirement for consciousness: autonomy, which requires both dynamics and independence; so there has to be active information processing, and it has to be isolated from outside influence, much the way we typically think of computation.
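For a feel of what ‘integration as a measurable quantity’ might mean, here is a toy sketch – emphatically mine, and emphatically not Tononi’s real Phi, which is defined over cause–effect repertoires and minimum information partitions. It just scores a little system by the mutual information across its weakest bipartition, so a system whose parts are informationally welded together scores high:

```python
from itertools import product
from math import log2

# A toy 'integration' score in the loose spirit of Phi (not Tononi's measure):
# the minimum, over bipartitions of the system, of the mutual information
# between the two parts -- the informational strength of the weakest seam.

def mutual_information(joint, part_a, part_b):
    """MI between two groups of bit-indices; joint is {state_tuple: prob}."""
    def marginal(indices):
        m = {}
        for state, p in joint.items():
            key = tuple(state[i] for i in indices)
            m[key] = m.get(key, 0.0) + p
        return m
    pa, pb = marginal(part_a), marginal(part_b)
    mi = 0.0
    for state, p in joint.items():
        if p > 0:
            ka = tuple(state[i] for i in part_a)
            kb = tuple(state[i] for i in part_b)
            mi += p * log2(p / (pa[ka] * pb[kb]))
    return mi

# Example: three binary units where unit 2 is the XOR of units 0 and 1.
joint = {(a, b, a ^ b): 0.25 for a, b in product([0, 1], repeat=2)}

bipartitions = [((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))]
toy_phi = min(mutual_information(joint, a, b) for a, b in bipartitions)
print(f"toy integration = {toy_phi:.2f} bits")  # 1.00 for this XOR system
```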

The really exciting part, however,  is the potential linkage with deep cosmological problems – in particular the quantum factorisation problem. This is way beyond my understanding, and the pages of equations Tegmark offers are no help, but the gist appears to be that  quantum mechanics offers us a range of possible universes.  If we want to get ‘physics from scratch’, all we have to work with is, in Tegmark’s words,

two Hermitian matrices, the density matrix ρ encoding the state of our world and the Hamiltonian H determining its time-evolution…

Please don’t ask me to explain; the point is that these matrices don’t pin down a single universe; there are an infinite number of acceptable solutions to the equations. If we want to know why we’ve got the universe we have – and in particular why we’ve got classical physics, more or less, and a world with an object hierarchy – we need something more. Very briefly, I take Tegmark’s suggestion to be that consciousness, with its property of autonomy, tends naturally to pick out versions of the universe in which there are similarly integrated and independent entities – in other words the kind of object-hierarchical world we do in fact see around us. To put it another way, and rather baldly: the universe looks like this because it’s the only kind of universe which is compatible with the existence of conscious entities capable of perceiving it.

That’s some pretty neat footwork, although frankly I have to let Tegmark take the steering wheel through the physics and in at least one place I felt a little nervous about his driving. It’s not a key point, but consider this passage:

Indeed, Penrose and others have speculated that gravity is crucial for a proper understanding of quantum mechanics even on small scales relevant to brains and laboratory experiments, and that it causes non-unitary wavefunction collapse. Yet the Occam’s razor approach is clearly the commonly held view that neither relativistic, gravitational nor non-unitary effects are central to understanding consciousness or how conscious observers perceive their immediate surroundings: astronauts appear to still perceive themselves in a semi-classical 3D space even when they are effectively in a zero-gravity environment, seemingly independently of relativistic effects, Planck-scale spacetime fluctuations, black hole evaporation, cosmic expansion of astronomically distant regions, etc.

Yeah… no. It’s not really possible that a professor of physics at MIT thinks that astronauts float around their capsules because the force of gravity is literally absent, is it? That kind of  ‘zero g’ is just an effect of being in orbit. Penrose definitely wasn’t talking about the gravitational effects of the Earth, by the way; he explicitly suggests an imaginary location at the centre of the Earth so that they can be ruled out. But I must surely be misunderstanding.

So far as consciousness is concerned, the appeal of Tegmark’s views will naturally be tied to whether one finds the IIT attractive, though they surely add a bit of weight to that idea. So far as quantum factorisation is concerned, I think he could have his result without the IIT if he wanted: although the IIT makes it particularly neat, it’s more the concept of autonomy he relies on, and that would very likely still be available even if our view of consciousness were ultimately somewhat different. The linkage with cosmological metaphysics is certainly appealing: essentially a sensible version of the Anthropic Principle, which Stephen Hawking for one has been prepared to invoke in a much less attractive form.

Yes: I feel pretty sure that anyone reading this is indeed conscious. However, the NYT recently ran a short piece from Michael S. A. Graziano which apparently questioned it. A fuller account of his thinking is in this paper from 2011; the same ideas were developed at greater length in his book Consciousness and the Social Brain.

I think the startling headline on the NYT piece misrepresents Graziano somewhat. The core of his theory is that awareness is in some sense a delusion, the reality behind it being simple attention. We have ways of recognising the attention of other organisms, and what it is fixed on (the practical value of that skill in environments where human beings may be either hunters or hunted is obvious): awareness is just our garbled version of attention. He offers the analogy of colour. The reality out there is different wavelengths of light: colour, our version of that, is a slightly messed-up, neatened version which is nevertheless very vivid to us in spite of being artificial to a remarkably large extent.

I don’t think Graziano is even denying that awareness exists, in some sense: as a phenomenon of some kind it surely does. What he means is rather that it isn’t veridical: what it tells us about itself, and what it tells us about attention, isn’t really true. As he acknowledges in the paper, there are labelling issues here, and I believe it would be possible to agree with the substance of what he says while recasting it in terms that look superficially much more conventional.

Another labelling issue may lurk around the concept of attention. On some accounts, it actually presupposes consciousness: to direct one’s attention towards something is precisely to bring it to the centre of your consciousness. That clearly isn’t what Graziano means: he has in mind a much more basic meaning. Attention for him is something simple like having your sensory organs locked on to a particular target. This needs to be clear and unambiguous, because otherwise we can immediately see potential problems over having to concede that cameras or other simple machines are capable of attention; but I’m happy to concede that we could probably put together some kind of criterion, perhaps neurological, that would fit the bill well enough and give Graziano the unproblematic materialist conception of attention that he needs.

All that looks reasonably OK as applied to other people, but Graziano wants the same system to supply our own mistaken impression of awareness. Just as we track the attention of others with the false surrogate of awareness, we pick up our own attentive states and make the same kind of mistake. This seems odd: when I sense my own awareness of something, it doesn’t feel like a deduction I’ve made from objective evidence about my own behaviour: I just sense it. I think Graziano actually wants it to be like that for other people too. He isn’t talking about rational, Sherlock Holmes-style reasoning about the awareness of other people; he has in mind something like a deep, old, lizard-brain kind of thing: like the sense of somebody there that makes the hairs rise on the back of the neck and your eyes quickly saccade towards the presumed person.

That is quite a useful insight, because what Graziano is concerned to deny is the reality of subjective experience – of qualia, in a word. To do so he needs to be able to explain why awareness seems so special when the reality is nothing more than information processing. I think this remains a weak spot in the theory, but the idea that it comes from a very basic system whose whole function is to generate a feeling of ‘something there’ helps quite a bit, and is at least moderately compatible with my own intuitions and introspections. What Graziano really relies on is the suggestion that awareness is a second-order matter: it’s a cognitive state about other cognitive states, something we attribute to ourselves and not, as it seems to be, directly about the real world. It just happens to be a somewhat mistaken cognitive state.

That still leaves us in some difficulty over the difference between me and other people. If my sense of my own awareness is generated in exactly the same way as my sense of the awareness of others, it ought to seem equally distant – but it doesn’t: it seems markedly more present and far less deniable.

More fundamentally, I still don’t really see why my attention should be misperceived. In the case of colours, the misrepresentation of reality comes from two sources, I think. One is the inadequacy of our eyes; our brain has to make do with very limited data on colour (and on distance and other factors) and so has to do things like hypothesising yellow light where it should be recognising both red and green, for example. Second, the brain wants to make it simple for us and so tries desperately to ensure that the same objects always look the same colour, although the wavelengths being received actually vary according to conditions. I find it hard to see what comparable difficulties affect our perception of attention. Why doesn’t it just seem like attention? Graziano’s view of it as a second-order matter explains how it can be wrong about attention, but not really why.

So I think the theory is less radical than it seems, and doesn’t quite nail the matter on some important points: but it does make certain kinds of sense and at the very least helps keep us roused from our dogmatic slumbers. Here’s a wild thought inspired (but certainly not endorsed) by Graziano. Suppose our sense of qualia really does come from a kind of primitive attention-detecting module. It detects our own attention and supplies that qualic feel, but since it also (in fact primarily) detects other people’s attention, should it not also provide a bit of a qualic feel for other people too? Normally when we think of our beliefs about other people, we remain in the explicit, higher realms of cognition: but what if we stay at a sort of visceral level, what if we stick with that hair-on-the-back-of-the-neck sensation? Could it be that now and then we get a whiff of other people’s qualia? Surely too heterodox an idea to contemplate…

We’re often told that when facing philosophical problems, we should try to ‘carve them at the joints’. The biggest joint on offer in the case of consciousness has seemed to be the ‘explanatory gap’ between the physical activity of neurons and the subjective experience of consciousness. Now, in the latest JCS, Reggia, Monner, and Sylvester suggest that there is another gap, and one where our attention should rightly be focussed.

They suggest that while the simulation of certain cognitive processes has proceeded quite well, the project of actually instantiating consciousness computationally has essentially got nowhere.  That project, as they say, is affected by a variety of problems about defining and recognising success. But the real problem lies in an unrecognised issue: the computational explanatory gap. Whereas the explanatory gap we’re used to is between mind and brain, the computational gap is between high-level computational algorithms and low-level neuronal activity. At the high level, working top-down, we’ve done relatively well in elucidating how certain kinds of problem-solving, goal-directed kinds of computation work, and been able to simulate them relatively effectively.  At the neuronal, bottom-up level we’ve been able to explain certain kinds of pattern-recognition and routine learning systems. The two different kinds of processing have complementary strengths and weaknesses, but at the moment we have no clear idea of how one is built out of the other. This is the computational explanatory gap.
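To make the two shores of the gap vivid (this illustration is mine, not the authors’), compare the two styles of computation side by side. The first function is top-down: explicit states, an explicit goal, a systematic search. The second is bottom-up: no goals anywhere, just weights nudged by error. Staring at either gives no hint of how to build the first out of the second.

```python
from collections import deque

# Top-down, goal-directed computation: breadth-first search to a named goal.
def plan(start, goal, neighbours):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                      # an explicit, reportable plan
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Bottom-up, neural-style computation: a perceptron learning a pattern.
def train_perceptron(examples, epochs=20, lr=0.1):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out               # no goal, no plan: just nudges
            w1, w2, b = w1 + lr * err * x1, w2 + lr * err * x2, b + lr * err
    return w1, w2, b

print(train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]))
```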

In philosophy, the authors plausibly claim, this important gap has been overlooked because in philosophical terms these are all ‘easy problem’ matters, and so tend to be dismissed as essentially similar matters of detail. In psychology, by contrast, the gap is salient but not clearly recognised as such: the lower-level processes correspond well to those identified as sub-conscious, while the higher-level ones match up with the reportable processes generally identified as conscious.

If Reggia, Monner and Sylvester are right, the well-established quest for the neural correlates of consciousness has been all very well, but what we really need is to bridge the gap by finding the computational correlates of consciousness. As a project, bridging this gap looks relatively promising, because it is all computational. We do not need to address any spooky phenomenology, we do not need to wonder how to express ‘what it is like’, or deal with anything ineffable; we just need to find the read-across between neural networking and the high-level algorithms which we can sort of see in operation. That may not be easy, but compared to the Hard Problem it sounds quite tractable. If solved, it will deliver a coherent account right from neural activity through to high-level decision making.

Of course, that leaves us with the Hard Problem unsolved, but the authors are optimistic that success might still banish the problem. They draw an analogy with artificial life: once it seemed obvious that there was a mysterious quality of being alive, and it was unclear how simple chemistry and physics could ever account for it. That problem of life has never been solved in those terms, but as our understanding of the immensely subtle chemistry of living things has improved, it has gradually come to seem less and less obvious that it is really a problem. In a similar way the authors hope that if the computational explanatory gap can be bridged, so that we gradually develop a full account of cognitive processes from the ground-level firing of neurons up to high-level conscious problem-solving, the Hard Problem will gradually cease to seem like something we need to worry about.

That is optimistic, but not unreasonably so, and I think the new perspective offered is a very interesting and plausible one. I’m convinced that the gap exists and that it needs to be bridged: but I’m less sure that it can easily be done.  It might be that Reggia, Monner, and Sylvester are affected in a different way by the same kind of outlook they criticise in philosophers: these are all computational problems, so they’re all tractable. I’m not sure how we can best address the gap, and I suspect it’s there not just because people have failed to recognise it, but because it is also genuinely difficult to deal with.

For one thing, the authors tend to assume the problem is computational, and it’s not clear that computation is of the essence here. The low-level processes at a neuronal level don’t look to be based on running any algorithm – that’s part of the nature of the gap. High-level processes may be capable of algorithmic simulation, but that doesn’t mean that’s the way the brain actually does it. Take the example of catching a ball – how do we get to the right place to intercept a ball flying through the air? One way to do this would be some complex calculations about perspective and vectors: the brain could abstract the data, do the sums, and send back the instructions that result. We could simulate that process in a computer quite well. But we know – I think – that that isn’t the way it’s actually done: the brain uses a simpler and quicker process which never involves abstract calculation, but is based on straight matching of two inputs: we just run forward if the elevation of the ball is decreasing and back if it’s increasing. This process incidentally corresponds to a sub-optimal algorithm, but one that is good enough in practice. Fielders are incapable of predicting where a ball is going, but they can run towards the spot in such a way as to be there when the ball arrives. It might be that all the ‘higher-level’ processes are like this, and that an attempt to match them up with ideally-modelled algorithms is therefore categorically off-track.
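The fielder’s rule is simple enough to write down directly. Here’s a minimal sketch – my formulation and my toy parameters, offered only to show the style of process involved: no trajectory is ever computed, two successive gaze angles are simply compared.

```python
import math

# A toy sketch of the fielder's heuristic (my formulation, purely illustrative).
# The ball approaches from smaller x; the fielder never predicts a landing spot.

def fielder_step(fielder_x, ball_x, ball_y, prev_angle, run=0.5):
    """One tick of the heuristic: compare the gaze angle with the last one."""
    angle = math.atan2(ball_y, fielder_x - ball_x)  # elevation of the gaze
    if angle < prev_angle:
        fielder_x -= run   # angle falling: ball dropping short, run forward
    else:
        fielder_x += run   # angle rising: ball going long, run back
    return fielder_x, angle
```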

Even if those doubts are right, however, it doesn’t mean that the proposed re-framing of the investigation is wrong or unhelpful, and in fact I’m inclined to think it’s a very useful new perspective.

 

You’ve heard of splitting the atom: W. Alex Escobar wants to split the quale. His recent paper (short article here) proposes that in order to understand subjective experience we may need to break it down into millions of tiny units of experience. He proposes a neurological model which to my naive eyes seems reasonable: the extraordinary part is really the phenomenology.

Like a lot of qualia theorists Escobar seems to have based his view squarely on visual experience, and the idea of micro-qualia is perhaps inspired by the idea of pixels in digitised images, or other analytical image-handling techniques.  Why would the idea help explain qualia?

I don’t think Escobar explains this very directly, at least from a philosophical point of view, but you can see why the idea might appeal to some people. Panexperientialists, for example, take the view that there are tiny bits of experience everywhere, so the idea that our minds assemble complex experiences out of micro-qualia might be quite congenial to them.  As we know, Christof Koch says that consciousness arises from the integration of information, so perhaps he would see Escobar’s theory as offering a potentially reasonable phenomenal view of the same process.

Unfortunately Escobar has taken a wrong turning, as others have done before, and isn’t really talking about ineffable qualia at all: instead, we might say he is merely effing the effable.

Ineffability, the quality of being inexpressible, is a defining characteristic of qualia as canonically understood in the philosophical literature. I cannot express to you what redness is like to me; if I could, you would be able to tell whether it was the same as your experience. If qualia could be expressed, my zombie twin  (who has none) would presumably become aware of their absence; when asked what it was like to see red, he would look puzzled and admit he didn’t really know, whereas ex hypothesi he gives the same fluent and lucidly illuminating answers that I do – in spite of not having the things we’re both talking about.

Qualia, in fact, have no causal effects and cannot be part of the scientific story. That doesn’t mean Escobar’s science is wrong or uninteresting, just that what he’s calling qualia aren’t really the philosophically slippery items of experience we keep chasing in vain in our quest for consciousness.

Alright, but setting that aside, is it possible that real qualia could be made up of many micro-qualia? No, it absolutely isn’t! In physics, a table can seem to be a single thing but actually be millions of molecules.  Similarly, what looks like a flat expanse of uniform colour on a screen may actually be thousands of pixels. But qualia are units of experience; what they seem like is what they are. They don’t seem like a cloud of micro-qualia, and so they aren’t. Now there could be some neuronal or psychological story going on at a lower level which did involve micro units; but that wouldn’t make qualia themselves splittable. What they are like is all there is to them; they can’t have a hidden nature.

Alas, Escobar could not have noticed that, because he was too busy effing the Effable.

I must admit to not being very familiar with Sam Harris’ work: to me he’s been primarily a member of the Four Horsemen of New Atheism: Dawkins, Dennett, Hitchens and that other one… However in the video here he expresses a couple of interesting views, one about the irreducible subjectivity of consciousness, the other about the illusory nature of the self. His most recent book – first chapter here – apparently seeks to reinterpret spirituality for atheists; he seems basically to be a rationalist Buddhist (there is of course no contradiction involved in becoming a Buddhist while remaining an atheist).

It’s a slight surprise to find an atheist champion who does not want to do away with subjectivity. Harris accepts that there is an interior subjective experience which cannot simply be reduced to its objective, material correlates: he likens the two aspects to two faces of a coin. If you like, you can restrict yourself to talking about one face of the coin, but you can’t go on to say that the other doesn’t really exist, or that features of the heads side are really just features of the tails side seen from another angle.  So far as it goes, this is all very sensible, and I think the majority of people would go along with it quite a way. What’s a little odd is that Harris seems content to rest there: it’s just a feature of the world that it has these two aspects, end of story. Most of us still want some explanation; if not a reduction then at least some metaphysics which allows us to go on calling ourselves monists in a respectable manner.

Harris’ second point is also one that many others would agree with, but not me. The self, he says, is an illusion: there is no consistent core which amounts to a self. In these arguments I feel the sceptics are often guilty of knocking down straw men: they set up a ridiculous version of the self and demolish that without considering more reasonable ideas. So they deny that there is any specific part of the brain that can be identified with the self, they deny the existence of a Cartesian Theatre, or they deny any unchanging core. But whoever said the self had to be unchanging or simple?

Actually, we can give a pretty good account of the self without ever going beyond common sense. Human beings are animals, which means I’m a big live lump of meat which has a recognisable identity at the simple physical and biological level: to deny that takes a more radical kind of metaphysical scepticism than most would be willing to go in for.  The behaviour of that ape-like lump of meat is also governed by a reasonably consistent set of memories and beliefs. That’s all we need for a self, no mystery here, folks, move along please.

Now of course my memories and my beliefs change, as does the size and shape of the beast they inhabit. At 56 I’m not the same as I was at 6.  But so what? Rivers, as Heraclitus reminds us, never contain exactly the same water at two different moments: they rise and fall, they meander and change course. We don’t have big difficulties over believing in rivers, though, or even in the Nile or the Amazon in particular. There may be doubts about what to treat as the true origin of the Nile, but people don’t go round saying it’s an illusion (unless they’ve gone in for some of that more radical metaphysics). On this, I think Dennett’s conception of the self as a centre of narrative gravity is closer to the truth than most, though he has, unfairly, I think, been criticised for equivocating over its reality.

Frequently what people really want to deny is not the self so much as the soul. Often they also want to deny that there is a special inward dimension: but as we’ve seen, Harris affirms that. He seems instead almost to be denying the qualic sense of self I suggested a while back. Curiously, he thinks that we can, in fact, overcome the illusion of selfhood: in certain exalted states he thinks we can transcend ourselves and see the world as it really is.

This is strange, because you would expect the illusion of self to stem from something fundamental about conscious experience (some terrible bottleneck, some inherent limitation), not from small, adjustable details of chemistry. Can selfhood really be a mental disease caused by an ayahuasca deficiency? Harris asserts that in these exalted states we’re seeing the world as it truly is, but wouldn’t that be the best state for us to stay in? You’d think we would have evolved that way if seeing reality just needed some small tweaks to the brain.

It does look to me as if Harris’ thinking has been conditioned a little too much by Buddhism.  He speaks with great respect of the rational basis of Buddhism, pointing out that it requires you to believe its tenets merely because they can be shown to be true, whereas Christianity seems to require as an arbitrarily specified condition of salvation your belief in things that are acknowledged to be incredible. I have a lot of sympathy for that point of view; but the snag is that if you rely on reasoning your reasoning has to be watertight: and, at the risk of giving offence, Buddhism’s isn’t.

Buddhism tells us that the world is in constant change; that change inevitably involves pain, and that to avoid the pain we should avoid the world. As it happens, it adds, the mutable world and the selves we think inhabit it are mere illusions, so if we can dispel those illusions we’re good.

But that’s a bit of a one-sided outlook, surely? Change can also involve pleasure, and in most of our lives there’s probably a neutral to positive balance; so surely it makes more sense to engage and try to improve that balance than opt out? Moreover, should we seek to avoid pain? Perhaps we ought to endure it, or even seek it out? Of course, people do avoid pain, but why should we give priority to the human aversion to pain and ignore the equally strong human aversion to non-existence? And what does it mean to call the whole world an illusion: isn’t an illusion precisely something that isn’t really part of the world? Everything we see is smoke and mirrors, very well, but aren’t smoke and mirrors (and indeed, tears, as Virgil reminds us) things?

A sceptical friend once suggested to me that all the major religions were made up by frustrated old men, often monks: the one thing they all agree on is that being a contemplative like them is just so much more worthwhile than you’d think at first sight, and that the cheerful ordinary life they missed out on is really completely worthless if not sinful. That’s not altogether accurate – Taoism, for example, praises ordinary life (albeit with a bit of a smirk on its lips);  but it does seem to me that Buddhism is keener to be done with the world than reason alone can justify.

It must be said that Harris’ point of view is refreshingly sophisticated and nuanced in comparison to the Dawkinsian weltanschauung; he seems to have the rare but valuable ability to apply his critical faculties to his own beliefs as well as other people’s. I really should read some of his books.

 

This piece (via MLU) notes how a robot is giving lectures in theology – or perhaps it would be more accurate to say that it’s being used as a prop for some theology lectures. It helps dramatise certain human issues, either ‘strong’ ones about its lacking the immortal soul human beings are taken to have in Christian thought, or ‘weak’ ones about more general ethical issues.

Nothing wrong with that; in fact I’ve heard it argued that all thinking robots would be theists, because to them it would seem obvious, almost self-evident, that conscious entities need a creator. No doubt D.A.V.I.D helps to raise interest, but he doesn’t seem half as provocative as the Jesus automaton described here; not a modern robot but a feature of the medieval church robot scene, apparently a far livelier business than we could ever have guessed.

It’s certainly true that those old automata had a deep impact on Western thought about the mind. Descartes describes hydraulic ones, and it’s clear that they helped form his idea of the human body as a mere machine. The study of anatomy was backing this up – Leonardo da Vinci, for example, had already concluded on the basis of anatomy alone that the brain was the centre from which the body was controlled. Together these two influences banished older ideas of volition acting throughout the body, with your arm moving because you just wanted it to, impelled directly by your will. These days, of course, some actually think we have gone too far with our brain-centrism, and need to bring in ideas of embodiment and mind extension; but rightly or wrongly the automata undoubtedly changed our minds dramatically.

The same kind of thing happened when effective computers came on the scene. Before then it had seemed obvious that though the body might be a machine, the mind categorically was not; now there was a persuasive case for thinking our minds as well as our bodies might be machines, and I think our idea of consciousness has been reshaped gradually since so that it can fill the role of ‘the thing machines can’t do’ for those who think there is such a thing.

It might be that this has distorted our way of looking at consciousness, which never occupied an important place in ancient thought, and does not really feature in the same way in non-western traditions (at least so far as I can tell). So perhaps robots shouldn’t be teaching us about the mind. On the other hand, they sometimes come up with interesting stuff. Dennett’s discussion of the frame problem is a nice example. Most people take the frame problem – in essence, dealing with all the small background details of real-world  situations which multiply indefinitely, are probably irrelevant, but might just come back to bite you – as a problem for AI: but Dennett thoughtfully suggested that it was in fact a problem for all forms of intelligence. It was just that the human brain dealt with it so smoothly we’d never noticed it before: but to explain how the brain dealt with it was at least as problematic as building a robot that could handle it. In this way the robots had given us a new insight into human cognition.  So perhaps we should listen to them?

Mark O’Brien gives a good statement of the computationalist case here; clear, brief, and commendably mild and dispassionate. My impression is that enthusiasm for computationalism – approximately, the belief that human thought is essentially computational in nature – is not what it was. It’s not that computationalists lost the argument; it’s more that the robots failed to come through. What AI research has so far delivered is, in this respect, much less than the optimists had hoped.

Anyway O’Brien’s case rests on two assumptions:

  • Naturalism is true.
  • The laws of physics are computable, that is, any physical process can be simulated to any desired degree of precision by a computer.

It’s immediately clear where he’s going. To represent it crudely, the intuition here is that naturalism means the world ultimately consists of physical processes, any physical process can run on a computer, ergo anything in the world can run on a computer, ergo it must be possible to run consciousness on a computer.
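The second assumption, at least, can be illustrated concretely. A minimal sketch (my example, not O’Brien’s): numerically simulate a body falling from rest, and the simulated answer approaches the exact one as the time step shrinks – ‘any desired degree of precision’ in miniature.

```python
# A minimal illustration (mine, not O'Brien's) of simulating a physical
# process to any desired precision: shrink the time step and the numerical
# answer converges on the closed-form one.

def time_to_fall(height=100.0, g=9.81, dt=0.1):
    """Euler-integrate a body falling from rest; return time to hit the ground."""
    y, v, t = height, 0.0, 0.0
    while y > 0:
        v += g * dt
        y -= v * dt
        t += dt
    return t

exact = (2 * 100.0 / 9.81) ** 0.5  # closed-form answer, about 4.515 s
for dt in (0.1, 0.01, 0.001):
    print(f"dt={dt}: simulated {time_to_fall(dt=dt):.4f} s, exact {exact:.4f} s")
```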

There’s an awful lot packed into those two assumptions. O’Brien tackles one issue with the idea of simulation: namely that simulating something isn’t doing it for real. A simulated rainstorm doesn’t make us wet. His answer is that simulation doesn’t produce physical realities, but it does seem to work for abstract things. I think this is basically right. If we simulate a flight to Paris, we don’t end up there; but the route calculated by the program is the actual route; it makes no sense to say it’s only a simulated route, because it’s actually identical with the one we should use if we really went to Paris. So the power of simulation is greater for informational entities than for physical ones, and it’s not unreasonable to suggest that consciousness seems more like a matter of information than of material stuff.
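To see the point in code, here is a toy route-finder with an invented map (my example, and the distances are made up). The list it returns is not a ‘simulated route’: it is the very route you would follow. Only the travelling itself goes unreproduced.

```python
import heapq

# A toy route-finder (invented map and distances, purely illustrative).
def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over {place: {neighbour: distance}}."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, place, path = heapq.heappop(queue)
        if place == goal:
            return dist, path
        if place in visited:
            continue
        visited.add(place)
        for nxt, d in graph[place].items():
            if nxt not in visited:
                heapq.heappush(queue, (dist + d, nxt, path + [nxt]))

toy_map = {
    'London':     {'Dover': 125, 'Folkestone': 110},
    'Dover':      {'London': 125, 'Calais': 50},
    'Folkestone': {'London': 110, 'Calais': 55},
    'Calais':     {'Dover': 50, 'Folkestone': 55, 'Paris': 290},
    'Paris':      {'Calais': 290},
}
# The route printed here is the actual route -- only the journey is missing.
print(shortest_route(toy_map, 'London', 'Paris'))
```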

There’s a deeper point, though. To simulate is not to reproduce: a simulation is the reproduction of the relevant aspects of the thing simulated. It’s implied that some features of the thing simulated are left out, ones that don’t matter. That’s why we get the different results for our Parisian exercise: the simulation necessarily leaves our actual physical locations untouched; those are irrelevant when it comes to describing the route, but essential when it comes to actually visiting Paris.

The problem is we don’t know which properties are relevant to consciousness, and to assume they are the kind of thing handled by computation simply begs the question. It can’t be assumed without an argument that physical properties are irrelevant here: John Searle and Roger Penrose in different ways both assert that they are of the essence. Even if consciousness doesn’t rely quite so brutally as that on the physical nature of the brain, we need to start with a knowledge of how consciousness works. Otherwise, we can’t tell whether we’ve got the right properties in our simulation –  even if they are in principle computational.

I don’t myself think Searle or Penrose are right: but I think it’s quite likely that the causal relationships in cognitive processes are the kind of essential thing a simulation would have to incorporate. This is a serious problem because there are reasons to think computer simulations never reproduce the causal relationships intact. In my brain event A causes event B and that’s all there is to it: in a computer, there’s always a script involved. At its worst what we get is a program that holds up flag A to represent event A and then flag B to represent event B: but the causality is mediated through the program. It seems to me this might well be a real issue.
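Crudely – and this caricature is mine, not O’Brien’s – the worry looks like this:

```python
# A crude caricature (mine) of the worry: in a simulation, event A never causes
# event B directly; the program's script always stands between them.

def simulated_brain_step(event_a_occurred):
    flag_a = event_a_occurred  # a flag held up to represent event A
    flag_b = False
    if flag_a:                 # the causal work is done by this line of script,
        flag_b = True          # not by A itself; B is only ever represented
    return flag_b

print(simulated_brain_step(True))  # True -- but was it A that caused B?
```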

O’Brien tackles another of Searle’s arguments: that you can’t get semantics from syntax; i.e., you can’t deal with meanings just by manipulating digits. O’Brien’s strategy here is to assume a robot that behaves pretty much the way I do: does it have beliefs? It says it does, and it behaves as if it did. Perhaps we’re not willing to concede that those are real beliefs: OK, let’s call them beliefs*. On examination it turns out that the differences between beliefs and beliefs* are nugatory: so on grounds of parsimony if nothing else we should assume they are the same.

The snag here is that there are no robots that behave the way I do.  We’ve had sixty years of failure since Turing: you can’t just have it as an assumption that our robot pals are self-evidently achievable (alas).  We know that human beings, when they do translation for example, extract meanings and then put the meanings into other words, whereas the most successful translation programs avoid meanings altogether and simply swap text strings for text strings according to a kind of mighty look-up table.
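In caricature – and real statistical translation systems are of course vastly more elaborate – the string-swapping strategy looks something like the sketch below (the phrase-table entries are invented):

```python
# A caricature (mine) of translation by string-swapping: no meanings anywhere,
# just a look-up table from phrases to phrases, longest match first.

PHRASE_TABLE = {  # invented entries, purely illustrative
    'the cat': 'le chat',
    'is on the table': 'est sur la table',
    'good morning': 'bonjour',
}

def translate(sentence):
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        for j in range(len(words), i, -1):   # try the longest phrase first
            if ' '.join(words[i:j]) in PHRASE_TABLE:
                out.append(PHRASE_TABLE[' '.join(words[i:j])])
                i = j
                break
        else:
            out.append(words[i])             # no entry: pass the word through
            i += 1
    return ' '.join(out)

print(translate('The cat is on the table'))  # 'le chat est sur la table'
```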

That kind of strategy won’t work when dealing with the formless complexity of the real world: you run into the analogues of the Frame Problem, or you just don’t really get started. It doesn’t even work that well for language: we know now that human understanding of language relies on pragmatic Gricean implicatures, and no-one can formalise those.

Finally O’Brien turns to qualia, and here I agree with him on the broad picture. He describes some of the severe difficulties around qualia and says, rightly I think, that in the end it comes down to competing intuitions.  All the arguments for qualia are essentially thought experiments: if we want, we can just say ‘no’ to all of them (as Dennett and the Churchlands, for example, do). O’Brien makes a kind of zombie argument: my zombie twin, who lacks qualia but resembles me in all other respects, would claim to have qualia and would talk about them just the way we do.  So the explanation for talk about qualia is not qualia themselves: given that, there’s no reason to think we ourselves have them.

Up to a point: but we get the conclusion that my zombie twin talks about qualia purely ex hypothesi: it’s just specified. It’s not an explanation, and an explanation is what we really need if we’re to dismiss the strong introspective sense most people have that qualia exist. If we could actually explain what makes the Twin talk about qualia, we’d be in a much better position.

So I mostly disagree, but I salute O’Brien’s exposition, which is really helpful.

Sci provided some interesting links in his comment on the previous post, one a lecture by Raymond Tallis. Tallis offers some comfort to theists who have difficulty explaining how or why an eternal creator God should be making one-off interventions in the time-bound secular world he had created. Tallis grants that’s a bad problem, but suggests atheists face an analogous one in working out how the eternal laws of physics relate to the local and particular world we actually live in.

These are interesting issues which bear on consciousness in at least two important ways, through human agency and the particularity of our experience; but today I want to leave the main road and run off down a dimly-lit alley that looks as if it contains some intriguing premises.

For the theists the problem is partly that God is omniscient and the creator of everything, so whatever happens, he should have foreseen it and arranged matters so that he does not need to intervene. An easy answer is that in fact his supposed interventions are actually part of how he set it up; they look like angry punishments or miraculous salvations to us, but if we could take a step back and see things from his point of view, we’d see it’s all part of the eternal plan, set up from the very beginning, and makes perfect sense. More worrying is the point that God is eternal and unchanging; if he doesn’t change he can’t be conscious.  I’ve mentioned before that our growing understanding of the brain, imperfect as it is, is making it harder to see how God could exist, and so making agnosticism a less comfortable position. We sort of know that human cognition depends on a physical process; how could an immaterial entity even get started? Instead of asking whether God exists, we’re getting to a place where we have to ask first how we can give any coherent account of what he could be – and it doesn’t look good, unless you’re content with a non-conscious God (not necessarily absurd) or a physical old man sitting on a cloud (which to be fair is probably how most Christians saw it until fairly recent times).

So God doesn’t change, and our developing understanding is redefining consciousness in ways that make an unchanging consciousness seem to involve a direct contradiction in terms. A changeless process? At this point I imagine an old gentleman dressed in black who has been sitting patiently in the corner, leaning forward with a kindly smile and pointing out that what we’re trying to do is understand the mind of God. No mere human being can do that, he says, so no wonder you’re getting into a muddle! This is the point where faith must take over.

Well, we don’t give up so easily; but perhaps he has a point; perhaps God has another and higher form of consciousness – metaconsciousness, let’s say – which resolves all these problems, but in ways we can never really understand.  Perhaps when the Singularity comes there will be robots who attain metaconsciousness, too: they may kindly try to explain it to us, but we’ll never really be able to get our heads round it.

Now of course, computers can already sail past us in terms of certain kinds of simple capacity: they can remember far more data much more precisely than we can, and they can work quickly through a very large number of alternatives. Even this makes a difference: I’ve mentioned before that exhaustive analysis by computers has shown that certain chess positions long considered draws are actually wins for one side; the winning tactics are just so long and complicated that human beings couldn’t see them, and can’t understand them intuitively even when they see them played out. But that’s not really any help; here we’re looking for something much more impressive. What we want to do is take the line which connects an early mammal’s level of cognition to ours and extend it until we’ve gone at least as far beyond the merely human. In facing up to this task, we’re rather like Flatlanders trying to understand the third dimension, or ordinary people trying to grasp the fourth: it isn’t really possible to get it intuitively, but we ought to be able to say some things about it by extrapolation.

So, early mammal – let’s call the beast Em (I don’t want to pick a real animal because that will derail us into consideration of how intelligent it really is) – works very largely on an instinctive or stimulus/response basis. It sees food, it pursues it and attempts to eat it. It lives in a world of concrete and immediate entities and has responses ready for some of them – fairly complex and somewhat variable responses, but fixed in the main. If we could somehow get Em to play chess with us, he would treat his men like a barbarian army, launching them towards us haphazardly en masse or one at a time, and we should have no trouble picking them off.

Human consciousness, by contrast, allows us to consider abstract entities (though we do not well understand their nature), to develop abstract general goals and to make plans and intentions which deal with future and possible events. These plans can be of great complexity; we can even play out long-range chess strategies, provided they’re not too complex. This kind of thing allows us to do a better job than Em of getting food, though to Em a great deal of our food-related activity is completely opaque and apparently unmotivated. A lot of the time when we’re working on activities that will bring us food it will seem to Em as if we’re doing nothing, or at any rate nothing at all related to food.

We can take it, then, that God or a future robot which is metaconscious will have moved on from mere goals to something more sophisticated – metagoals, whatever they are. He, or it, will understand abstractions as well as we understand concrete objects, and will perhaps employ meta-abstractions, about which even it might be a little shaky. God and the robot will at times have goals, just as we eat food, but their activities in respect of them will be far more powerful and productive than the simple direct stuff we do, and in our eyes utterly unrelated to the simpler goals we can guess at. A lot of the time they may appear to be doing nothing when they are actually pressing forward with an important metaproject.

But look, you may say, we have no reason to suppose this meta stuff exists at all.  Em was not capable of abstract thought; we are. That’s the end of the sequence; you either got it or you don’t got it. We got it: our memory capacity and so on may be improvable, but there isn’t any higher realm. Perhaps God’s objectives would be longer term and more complex than ours, but that’s just a difference of degree.

It could be so, but that’s how things would seem to Em, mutatis mutandis. Rocks don’t get food, he points out; but we early mammals get it. See food, take food, eat food: we get it. Now humans may see further (nice trick, that hind legs thing); they may get bigger food. But this talk of plans means nothing; there’s nothing to your plans and your abstraction thing except getting food. You do get food on a big scale, I notice, but I guess that’s really just luck or some kind of magic. Metaconsciousness would seem similarly unimaginable to us, and its results would equally look like magic, or like miracles.

This all fits very well, of course, with Colin McGinn’s diagnosis. According to him, there’s nothing odd about consciousness in itself, we just lack the mental capacity to deal with it. The mental operations available to us confine us within a certain mental sphere: we are restricted by cognitive closure. It could be that we need metaconsciousness to understand consciousness (and then, unimaginably, metametaconsciousness in order to understand metaconsciousness).

This is an odd place to have ended up, though: we started out with the problem that God is eternal and therefore can’t be conscious: if He can’t be conscious then He certainly can’t ascend to even higher cognitive states, can He? Remember we thought metaconsciousness would probably enable him to understand Platonic abstractions in a way we can’t, and even deal with meta-Platonic entities. Perhaps at that level the apparent contradiction between being unchanging and being aware is removed or bypassed, rather the way that putting five squares together in two dimensions is absurd but a breeze in three: hell, put six together and make a cube of it!

Do I really believe in metaconsciousness? No, but excuse me; I have to go and get food.

The Platopus makes a good point about compatibilism (the view that some worthwhile kind of free will is compatible with the standard deterministic account of the world given by physics).

One argument holds that there isn’t effectively any difference between compatibilists and those who deny the reality of free will. Both deny that radical (or ‘libertarian’) free will exists. They agree that there’s no magic faculty which interrupts the normal causal process with volitions. Given that level of agreement, isn’t it just a matter of what labelling strategy we prefer? Because it’s that radical kind of free will that is really at issue: that’s what people want, not some watered-down legalistic thing.

That’s the argument the Platopus wishes to reject. He accepts that compatibilism involves some redefinition, but draws a distinction between illegitimate and legitimate redefinition. As an example of the latter, he proposes the example of atoms. In Greek philosophy, and at first in the modern science which borrowed the word, ‘atom’ meant something indivisible. There was a period when the atoms of modern physics seemed to be just that, but in due course it emerged that they could in fact be ‘split’. One strategy at that point would have been to say, well, it turns out those things were never atoms after all: we must give them a new name and look elsewhere for our indivisible atoms – or perhaps atoms don’t actually exist after all. What happened in reality was that we went on calling those particles atoms, and gave up our belief that they were indivisible.

In a somewhat similar way, the Platopus argues that it makes sense for us to redefine freedom of the will even though we now know it is not libertarian freedom. The analogy is not perfect, and in some ways the case is actually stronger for free will. Atoms, after all, were originally a hypothesis derived from the purest metaphysics. On one interpretation (just mine, really), the early atomists embraced the idea because they feared that unless the process of division stopped somewhere, the universe would suffer from a radical indeterminism. Division could not stop until the particles were of zero magnitude – non-existent, and how could we make real things out of items which did not themselves exist? They could not have imagined the modern position in which, on one interpretation (yes) as we go more micro the nature of the reality involved changes until the physics has boiled away leaving only maths.

Be that as it was or may be, I think the Platopus is quite right and that the redefinition required by compatibilism is not just respectable but natural and desirable. I think in fact we could go a little further and say that it’s not so much a redefinition as a correction of inherent flaws in the pre-theoretical idea of free will.

What do I mean? Well, the original problem here is that the deterministic physical account seems to leave no room for the will. People try to get round that by suggesting different kinds of indeterminism: perhaps we can get something out of chaos theory, or out of quantum mechanics. The problem with those views is that they go too far and typically end up giving us random action: which is no more what we wanted than determined action. Alternatively, old-fashioned libertarians rely on the intervention of the spirit, typically with no satisfactory account of how the spirit makes decisions or how it manages to intervene. That, I submit, was never really what people meant either: in their Sunday best they might appeal to the action of their soul, but in everyday life having a free choice was something altogether more practical; a matter of not having a knife at your throat.

In short, I’d claim that the pre-theoretical understanding of free will always implicitly took it to be something that went on in a normal physical world, and that’s what compatibilism restores, saving the idea from the mad excrescences added by theologians and philosophers.

Myself I think that the kind of indeterminism we can have, and the one we really need, is the one that comes from our power to think about anything. Most processes in the world can be predicted because the range of factors involved can be known and listed to begin with: our mental processes are not like that. Our neurons may work deterministically according to physics, but they allow us to think about anything at any time: about abstractions,  remote entities, and even imaginary things. Above all, they allow us somehow to think about the future and enable future contingencies (in some acceptable sense) to influence our present decisions. When our actions are determined by our own thoughts about the future, they can properly be called free.

That is not a complete answer: it defers the mystery of freedom to the mystery of intentionality; but I’ll leave that one for now…

There’s an interesting video discussion here at the Institute of Art and Ideas, between Margaret Boden, Steven Rose and Barry Smith, on Neuroscience versus Philosophy. I’ve never found neuroscientists that belligerent myself; it seems to be mainly other people who make exaggerated claims on behalf of their subject (although talking up a particular bit of research is not unknown).

While we’re looking at videos, you wouldn’t want to miss Consciousness Central, a series of reports from this year’s Tucson conference on Towards a Science of Consciousness.