How can panpsychists sleep?

OUP Blog has a sort of preview by Bruntrup and Jaskolla of their forthcoming collection on panpsychism, due out in December, with a video of David Chalmers at the end: they sort of credit him with bringing panpsychist thought into the mainstream. I’m using ‘panpsychism’ here as a general term, by the way, covering any view that says consciousness is present in everything, though most advocates really mean that consciousness or experience is everywhere, not souls as the word originally implied.

I found the piece interesting because they put forward two basic arguments for panpsychism, both a little different from the desire for simplification which I’ve always thought was behind it – although it may come down to the same basic ideas in the end.

The first argument they suggest is that ‘nothing comes of nothing’; that consciousness could not have sprung out of nowhere, but must have been there all along in some form. In this bald form, it seems to me that the argument is virtually untenable. The original Scholastic argument that nothing comes of nothing was, I think, a cosmological argument. In that form it works. If there really were nothing, how could the Universe get started? Nothing happens without a cause, and if there were nothing, there could be no causes.  But within an existing Universe, there’s no particular reason why new composite or transformed entities cannot come into existence.  The thing that causes a new entity need not be of the same kind as that entity; and in fact we know plenty of new things that once did not exist but do now; life, football, blogs.

So to make this argument work there would have to be some reason to think that consciousness was special in some way, a way that meant it could not arise out of unconsciousness. But that defies common sense, because consciousness coming out of unconsciousness is something we all experience every day when we wake up; and if it couldn’t happen, none of us would be here as conscious beings at all, because we couldn’t have been born, or at least could never have become aware.

Bruntrup and Jaskolla mention arguments from Nagel and William James; Nagel’s, I think, rests on an implausible denial of emergentism; that is, he denies that a composite entity can have any interesting properties that were not present in the parts. The argument in William James is that evolution could not have conferred some radically new property and that therefore some ‘mind dust’ must have been present all the way back to the elementary particles that made the world.

I don’t find either contention at all appealing, so I may not be presenting them in their best light; the basic idea, I think, is that consciousness is just a different realm or domain which could not arise from the physical. Although individual consciousnesses may come and go, consciousness itself is constant and must be universal. Even if we go some way with this argument I’d still rather say that the concept of position does not apply to consciousness than say it must be everywhere.

The second major argument is one from intrinsic nature. We start by noticing that physics deals only with the properties of things, not with the ‘thing in itself’. If you accept that there is a ‘thing in itself’ apart from the collection of properties that give it its measurable characteristics, then you may be inclined to distinguish between its interior reality and its external properties. The claim then is that this interior reality is consciousness. The world is really made of little motes of awareness.

This claim is strangely unmotivated in my view. Why shouldn’t the interior reality just be the interior reality, with nothing more to be said about it? If it does have some other character it seems to me as likely to be cabbagey as conscious. Really it seems to me that only someone who was pretty desperately seeking consciousness would expect to find it naturally in the ding an sich.  The truth seems to be that since the interior reality of things is inaccessible to us, and has no impact on any of the things that are accessible, it’s a classic waste of time talking about it.

Aha, but there is one exception; our own interior reality is accessible to us, and that, it is claimed, is exactly the mysterious consciousness we seek. Now, moreover, you see why it makes sense to think that all examples of this interiority are conscious – ours is! The trouble is, our consciousness is clearly related to the functioning of our brain. If it were just the inherent inner property of that brain, or of our body, it would never go away, and unconsciousness would be impossible. How can panpsychists sleep at night? If panpsychism is true, even a dead brain has the kind of interior awareness that the theory ascribes to everything. In other words, my human consciousness is a quite different thing from the panpsychist consciousness everywhere; somehow in my brain the two sit alongside without troubling each other. My consciousness tells us nothing about the interiority of objects, nor vice versa: and my consciousness is as hard to explain as ever.

Maybe the new book will have surprising new arguments? I doubt it, but perhaps I’ll put it on my Christmas present list.

Alters of the Universe

Bernardo Kastrup has some marvellous invective against AI engineers in this piece…

The computer engineer’s dream of birthing a conscious child into the world without the messiness and fragility of life is an infantile delusion; a confused, partial, distorted projection of archetypal images and drives. It is the expression of the male’s hidden aspiration for the female’s divine power of creation. It represents a confused attempt to transcend the deep-seated fear of one’s own nature as a living, breathing entity condemned to death from birth. It embodies a misguided and utterly useless search for the eternal, motivated only by one’s amnesia of one’s own true nature. The fable of artificial consciousness is the imaginary band-aid sought to cover the engineer’s wound of ignorance.

I have been this engineer.

I think it’s untrue, but you don’t have to share the sentiment to appreciate the splendid rhetoric.

Kastrup distinguishes intelligence, which is a legitimate matter of inputs, outputs and the functions that connect them, from consciousness, the true what-it-is-like-ness of subjectivity. In essence he just doesn’t see how setting up functions in a machine can ever touch the latter.

Not that Kastrup has a closed mind: he speaks approvingly of Pentti Haikonen’s proposed architecture; he just doesn’t think it works. As Kastrup sees it, Haikonen’s network merely gathers together sparks of consciousness: it then does a plausible job of bringing them together to form more complex kinds of cognition, but in Kastrup’s eyes it assumes that consciousness is there to be gathered in the first place: that it exists out there in tiny parcels amenable to this kind of treatment. There is in fact, he thinks, absolutely no reason to think that this kind of panpsychism is true: no reason to think that rocks or drops of water have any kind of conscious experience at all.

I don’t know whether that is the right way to construe Haikonen’s project (I doubt whether gathering experiential sparks is exactly what Haikonen supposed he was about). Interestingly, though Kastrup is against the normal kind of panpsychism (if the concept of  ‘normal panpsychism’ is admissible), his own view is essentially a more unusual variety.

Kastrup considers that we’re dealing with two aspects here, internal and external: our minds have both; the external is objective, the internal represents subjectivity. Why wouldn’t the world also have these two aspects? (Actually it’s hard to say why anything should have them, and we may suspect that by taking it as a given we’re in danger of smuggling half the mystery out of the problem, but let that pass.) Kastrup takes it as natural to conclude that the world as a whole must indeed have the two aspects (I think at this point he may have inadvertently ‘proved’ the existence of God in the form of a conscious cosmos, which is regrettable, but again let’s go with it for now); but not parts of the world. The brain, we know, has experience, but the groups of neurons that make it up do not (do we actually know that?); it follows that while the world as a whole has an internal aspect, objects or entities within it generally do not.

Yet of course, the brain manages to have two aspects, which must surely be something to do with the structure of the brain? May we not suspect that whatever it is that allows the brain to have an internal aspect, a machine could in principle have it too? I don’t think Kastrup engages effectively with this objection; his view seems to be that metabolism is essential, though why that should be, and why machines can’t have some form of metabolism, we don’t know.

The argument, then, doesn’t seem convincing, but it must be granted that Kastrup has an original and striking vision: our consciousnesses, he suggests, are essentially like the ‘alters’ of Dissociative Identity Disorder, better known as Multiple Personality, in which several different people seem to inhabit a single human being. We are, he says, like the accidental alternate identities of the Universe (again, I think you could say, of God, though Kastrup clearly doesn’t want to).

As with Kastrup’s condemnation of AI engineering, I don’t think at all that he is right, but it is a great idea. It is probable that in his book-length treatments of these ideas Kastrup makes a stronger case than I have given him credit for above, but I do in any case admire the originality of his thinking, and the clarity and force with which he expresses it.

Not a panpsychist but an emergentist?

Christof Koch declares himself a panpsychist in this interesting piece, but I don’t think he really is one. He subscribes to the Integrated Information Theory (IIT) of Giulio Tononi, which holds that consciousness is created by the appropriate integration of sufficient quantities of information. The level of integrated information can be mathematically expressed in a value called Phi: we have discussed this before a couple of times. I think this makes Koch an emergentist, but curiously enough he vigorously denies that.

Koch starts with a quotation about every outside having an inside which aptly brings out the importance of the first-person perspective in all these issues. It’s an implicit theme of what Koch says (in my reading at least) that consciousness is something extra. If we look at the issue from a purely third-person point of view, there doesn’t seem to be much to get excited about. Organisms exhibit different levels of complexity in their behaviour and it turns out that this complexity of behaviour arises from a greater complexity in the brain. You don’t say! The astonishment meter is still indicating zero. It’s only when we add in the belief that at some stage the inward light of consciousness, actual phenomenal experience, has come on that it gets interesting. It may be that Koch wants to incorporate panpsychism into his outlook to help provide that ineffable light, but attempting to make two theories work together is a risky path to take. I don’t want to accuse anyone of leaning towards dualism (which is the worst kind of philosophical bitchiness) but… well, enough said. I think Koch would do better to stick with the austere simplicity of IIT and say: that magic light you think you see is just integrated information. It may look a bit funny but that’s all it is, get used to it.

He starts off by arguing persuasively that consciousness is not the unique prerogative of human beings. Some, he says, have suggested that language is the dividing line, but surely some animals, preverbal infants and so on should not be denied consciousness? Well, no, but language might be interesting, not for itself but because it is an auxiliary effect of a fundamental change in brain organisation, one that facilitates the handling of abstract concepts, say (or one that allows the integration of much larger quantities of information, why not?). It might almost be a side benefit, but also a handy sign that this underlying reorganisation is in place, which would not be to say that you couldn’t have the reorganisation without having actual language. We would then have something, human-style thought, which was significantly different from the feelings of dogs, although the impoverishment of our vocabulary makes us call them both consciousness.

Still, in general the view that we’re dealing with a spectrum of experience, one which may well extend down to the presumably dim adumbrations of worms and insects, seems only sensible.

One appealing way of staying monist but allowing for the light of phenomenal experience is through emergence: at a certain level we find that the whole becomes more than the sum of its parts: we do sort of get something extra, but in an unobjectionable way. Strangely, Koch will have no truck with this kind of thinking. He says

‘the mental is too radically different for it to arise gradually from the physical’.

At first sight this seemed to me almost a direct contradiction of what he had just finished saying. The spectrum of consciousness suggests that we start with the blazing 3D cinema projector of the human mind, work our way down to the magic lanterns of dogs, the candles of newts, and the faint tiny glows of worms – and then the complete darkness of rocks and air. That suggests that consciousness does indeed build up gradually out of nothing, doesn’t it? An actual panpsychist, moreover, pushes the whole thing further, so that trees have faint twinkles and even tiny pieces of clay have a detectable scintilla.

Koch’s view is not, in fact, contradictory: what he seems to want is something like one of those dimmer switches that has a definite on and off, but gradations of brightness when on. He’s entitled to take that view, but I don’t think I agree that gradual emergence of consciousness is unimaginable. Take the analogy of a novel. We can start with Pride and Prejudice, work our way down through short stories or incoherent first drafts, to recipe books or collections of limericks, books with scribble and broken sentences, down to books filled with meaningless lines, and the chance pattern of cracks on a wall. All the way along there will be debatable cases, and contrarians who disbelieve in the real existence of literature can argue against the whole thing (‘You need to exercise your imagination to make Pride and Prejudice a novel; but if you are willing to use your imagination I can tell you there are finer novels in the cracks on my wall than anything Jane bloody Austen ever wrote…’) : but it seems clear enough to me that we can have a spectrum all the way down to nothing. That doesn’t prove that consciousness is like that, but makes it hard to assert that it couldn’t be.

The other reason it seems odd to hear such an argument from Koch is that he espouses IIT, which seems to require a spectrum that sits well with emergentism. Presumably on Koch’s view a small amount of integrated information does nothing, but at some point, when there’s enough being integrated, we start to get consciousness? Yet he says:

“if there is nothing there in the first place, adding a little bit more won’t make something. If a small brain won’t be able to feel pain, why should a large brain be able to feel the god-awfulness of a throbbing toothache? Why should adding some neurons give rise to this ineffable feeling?”

Well, because a small brain only integrates a small amount of information, whereas a large one integrates enough for full consciousness? I think I must be missing something here, but look at this.

“ [Consciousness] is a property of complex entities and cannot be further reduced to the action of more elementary properties. We have reached the ground floor of reductionism.”

Isn’t that emergence? Koch must see something else which he thinks is essential to emergentism which he doesn’t like, but I’m not seeing it.

The problem with Koch being a panpsychist is that for panpsychists souls (or in this case consciousness) have to be everywhere. Even a particle of stone or a screwed-up sheet of wrapping paper must have just the basic spark; the lights must be at least slightly on. Koch doesn’t want to go quite that far – and I have every sympathy with that, but it means taking the pan out of the panpsychist. Koch fully recognises that he isn’t espousing traditional full-blooded panpsychism, but in my opinion he deviates too far to be entitled to the badge. What Koch believes is that everything has the potential to instantiate consciousness when correctly organised and integrated. That amounts to no more than believing in the neutrality of the substrate: that neurons are not essential and that consciousness can be built with anything so long as its functional properties are right. All functionalists and a lot of other people (not everyone, of course) believe that without being panpsychists.

Perhaps functionalism is really the direction Koch’s theories lean towards. After all, it’s not enough to integrate information in any superficial way. A big database which exhaustively cross-referenced the Library of Congress would not seem much of a candidate for consciousness. Koch realises that there have to be some rules about what kinds of integration matter, but I think that if the theory develops far enough these other constraints will play an increasingly large role, until eventually we find that they have taken over the theory and the quantity of integrated information has receded to the status of a necessary but not sufficient condition.

I suppose that that might still leave room for Tononi’s Phi meter, now apparently built, to work satisfactorily. I hope it does, because it would be pretty useful.

Quark consciousness

One of the main objections to panpsychism, the belief that mind, or at any rate experience, is everywhere, is that it doesn’t help. The point of a theory is to take an issue that was mysterious to begin with and make it clear; but panpsychism seems to leave us with just as much explaining to do as before. In fact, things may be worse. To begin with we only needed to explain the occurrence of consciousness in the human brain; once we embrace panpsychism we have to explain its occurrence everywhere and account for the difference between the consciousness in a lump of turf and the consciousness in our heads. The only way that could be an attractive option would be if there were really good and convincing answers to these problems ready to hand.

Creditably, Patrick Lewtas recognises this and, rolling up his sleeves, has undertaken the job of explaining first, how ‘basic bottom level experience’ makes sense, and second, how it builds up to the high-level kind of experience going on in the brain. A first paper, tackling the first question, "What is it like to be a Quark", appeared in the JCS recently. (Alas, there doesn’t seem to be an online version available to non-subscribers.)

Lewtas adopts an idiosyncratic style of argument, loading himself with Constraints like a philosophical Houdini.

  1. Panpsychism should attribute to basic physical objects all but only those types of experiences needed to explain higher-level (including, but not limited to, human) consciousness.
  2. Panpsychism must eschew explanatory gaps.
  3. Panpsychism must eschew property emergence.
  4. Maximum possible complexity of experience varies with complexity of physical structure.
  5. Basic physical objects have maximally simple structures. They lack parts, internal structure, and internal processes.
  6. Where possible and appropriate, panpsychism should posit strictly-basic conscious properties similar, in their higher-order features to strictly-basic physical properties.
  7. Basic objects with strictly-basic experiences have them constantly and continuously.
  8. Each basic experience-type, through its strictly-basic instances, characterizes (at least some) basic physical objects.

Of course it is these very constraints that end up getting him where he wanted to be all along.  To justify each of them and give the implications would amount to reproducing the paper; I’ll try to summarise in a freer style here.

Lewtas wants his basic experience to sit with basic physical entities and he wants it to be recognisably the same kind of thing as the higher level experience. This parsimony is designed to avoid any need for emergence or other difficulties; if we end up going down that sort of road, Lewtas feels we will fall back into the position where our theory is too complex to be attractive in competition with more mainstream ideas. Without seeming to be strongly wedded to them, he chooses to focus on quarks as his basic unit, but he does not say much about the particular quirks of quarks; he seems to have chosen them because they may have the property he’s really after; that of having no parts.

The thing with no parts! Aiee! This ancient concept has stalked philosophy for thousands of years under different names: the atom, a substance, a monad (the first two names long since passed on to other, blameless ideas). I hesitate to say that there’s something fundamentally problematic with the concept itself (it seems to work fine in geometry); but in philosophy it seems hard to handle without generating a splendid effusion of florid metaphysics.  The idea of yoking it together with the metaphysically tricky modern concept of quarks makes my hair stand on end. But perhaps Lewtas can keep the monster in check: he wants it, presumably, because he wants to build on bedrock, with no question of basic experience being capable of further analysis.

Some theorists, Lewtas notes, have argued that the basic level experience of particles must be incomprehensible to us; as incomprehensible as the experiences of bats according to Nagel, or indeed even worse. Lewtas thinks things can, and indeed must, be far simpler and more transparent than that. The experience of a quark, he suggests, might just be like the simple experience of red; red detached from any object or pattern, with no limits or overtones or significance; just red.  Human beings can most probably never achieve such simplicity in its pure form, but we can move in that direction and we can get our heads around ‘what it’s like’ without undue difficulty.

Now the partless thing begins to give trouble; a thing which has no parts cannot change, because change would imply some kind of reorganisation or substitution; you can’t rearrange something that has no parts and if you substitute anything you have to substitute another whole thing for the first one, which is not change but replacement. At best the thing’s external relations can change. If one of the properties of the quark is an experience of red, therefore, that’s how it stays. It carries on being an experience of red, and it does not respond in any way to its environment or anything outside itself. I think we can be forgiven if we already start to worry a little about how this is going to work with a perceptual system, but that is for the later paper.

Lewtas is aware that he could be in for an awfully large catalogue of experiences here if every possible basic experience has to be assigned to a quark. His hope is that some experiences will turn out to be composites, so that we’ll be able to make do with a more restricted set: and he gives the example of orange experience reducing to red and yellow experience. A bad example: orange experience is just orange experience, actually, and the fact that orange paint can be made by mixing red and yellow paint is just a quirk of the human visual system, not an essential quality of orange light or orange phenomenology. A bad example doesn’t mean the thesis is false; but a comprehensive reduction of phenomenology to a manageable set of basic elements is a pretty non-trivial requirement. I think in fact Lewtas might eventually be forced to accept that he has to deal with an infinite set of possible basic experiences. Think of the experience of unity, duality, trinity…  That’s debatable, perhaps.

At any rate Lewtas is prepared to some extent. He accepts explicitly that the number of basic experiences will be greater than the number of different kinds of basic quark, so it follows that basic physical units must be able to accommodate more than one basic experience at the same time. So your quark is having a simple, constant experience of red and at the same time it’s having a simple, constant experience of yellow.

That has got to be a hard idea for Lewtas to sell. It seems to risk the simple transparency which was one of his main goals, because it is surely impossible to imagine what having two or more completely pure but completely separate experiences at the same time is like.  However, if that bullet is bitten, then I see no particular reason why Lewtas shouldn’t allow his quarks to have all possible experiences simultaneously (my idea, not his).

By the time we get to this point I find myself wondering what the quarks, or the basic physical units, are contributing to the theory. It’s not altogether clear how the experiences are anchored to the quarks and since all experiences are going to have to be readily available everywhere, I wonder whether it wouldn’t simplify matters to just say that all experiences are accessible to all matter. That might be one of the many issues cleared up in the paper to follow where perhaps, with one cat-like leap, Lewtas will escape the problems which seem to me to be on the point of having him cornered…

Decommissioning Physicalism

Picture: Honeycomb series.

Panpsychism, or perhaps it would be more accurate to say panexperientialism, has seemed to be quite a popular view in previous discussions here, but I’ve always found it problematic. Panexperientialism, as the name suggests, is the belief that experience is everywhere; that experience is the basis of reality, out of which everything else is built. Objects which seem dead and inanimate ultimately consist of experience just as much as we do: it just doesn’t seem like that because the experiences which make them up are not our experiences. The attractive feature of this view is that it removes some of the mystery from consciousness: instead of being a very rare phenomenon which only occurs in very specific circumstances, such as those which exist in our brains, consciousness of a sort is universal, and so it’s not at all surprising that we ourselves are conscious.

One of the problems is the question of how many experiential loci we’re dealing with. Does the table have experiences? Does half the table? Do the table legs have four separate sets of experiences, and at the same time a sort of federal joint experience as a composite entity? There are ways to solve these problems, but they’re distinctly off-putting to me. More fundamentally, I’m inclined to doubt whether the theory is as helpful in explaining things as  it seems. OK, so my brain has experience just because it’s an object and all objects have experience; but surely that brain-as-an-object experience is the same kind of thing as the experiences rocks have, while the experience I’m interested in, the kind that influences my bodily behaviour, is something else; something which remains unexplained.

One of the sources of difficulty here, I think, as with many metaphysical theories, is that the philosophical point of view is not well integrated with any clear scientific conception.  When we need to pin down our loci of experience we’re left to rummage around and see what we can come up with – atoms? Too small.  Discrete physical entities? What exactly are they? (Shintoism, if I understand it correctly, has bitten this kind of bullet and given up on a sharp definition of what is animate: lots of things can have souls, but only if they’re salient or impressive. Mount Fuji definitely gets one, but some anonymous pile of dirt in your back garden is just a pile of dirt.)

But what about Finite Eventism? This theory (as expounded by Carey R. Carlson in Chapter 12 of Mind that abides: Panpsychism in the New Millennium) is a theory of physics first and foremost, one that happens to provide a neat basis for a solution to the mind-body problem taking a panpsychist/panexperientialist view. It is based on the late ideas of Russell and Whitehead, though one of its appealing features is that it dovetails well with quantum theory in a way Russell and Whitehead were not aware of.

The gist of the theory is a radical reduction of physics to a minimalist ontology consisting of events and the basic temporal relations of being earlier or later (there’s also cause and effect, which I take to be causally connected varieties of earlier/later). There are some rules which prevent inconsistency (events can’t be earlier/later than themselves) and that’s it. In particular, there is no initial concept of space or extension. We are allowed convergent and divergent causal paths, so we can construct complex multiple ‘honeycomb’ pathways like the one shown. The four consistent axes which appear in these diagrams are taken to make up a 4-D manifold of space-time, with neutrinos and electrons delineated by gaps, and the repetition of patterns in sequence representing persistence over time.  Quanta, in this theory, are represented by the steps between two events; the number of intermediate steps between two events corresponds with the relative frequency of the relevant path, and these relative frequency ratios provide relative energy ratios, following Planck’s E=hf.
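
To make that last step a little more concrete (this is my own gloss on the short description linked below, so take the notation as illustrative rather than as Carlson’s): if two causal paths pack $n_1$ and $n_2$ intermediate steps into the same stretch of the event sequence, those step-counts stand in for relative frequencies, and Planck’s relation then fixes the energy ratio:

$$ \frac{E_1}{E_2} = \frac{h f_1}{h f_2} = \frac{f_1}{f_2} = \frac{n_1}{n_2} $$

so energy, like everything else in the scheme, comes down to counting events and their temporal orderings.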

I hope those brief,  inadequate remarks give a hint of how the basics of physics can be built up in a very elegant manner from the simple topology of these sequences: it looks impressive, though I must frankly admit that I’m not competent to explain the theory properly, never mind evaluate it. Readers may like to look at this short description (pdf).

The question for us is, what are these ‘events’? Carlson follows Whitehead in seeing them all as ‘occasions of experience’; in some ways they resemble the monads of Leibniz’s radically relativist ontology;  they are pure phenomenal moments. Carlson argues that the basis of all science is phenomenal experience; historically, in order to account for those experiences better through Newtonian style physics it became necessary to postulate unexperienced abstract entities; and then the phenomenal experience dropped out of the theory leaving us with a world made of fundamentally unaccountable entities.

The best part of it for me is that the theory provides relatively good and clear answers to the problems I mentioned above. It’s clear that the loci of experience are situated in the events: I take it that continuity of experience is guaranteed in exactly the same way as physical continuity by repeating patterns (though what the nature of those patterns amounts to is an interesting question).  If that’s so, then the relationship between the consciousness of my brain-as-object and the consciousness of my brain-as-brain is also helpfully clarified.

How far, though, is the actual nature of my consciousness clarified? The explanation of most of my mental characteristics is deferred upwards to be explained by the working of the brain. That is, no doubt, exactly as it should be: but it leaves me with no particular reason to adopt panpsychism. The one feature which the theory does explain is phenomenal experience (alright, a fairly important feature!); but it really only does so by telling us that things just do have phenomenal experience.  Why should we take it that the events are phenomenal in nature – doesn’t ontological parsimony suggest they should be featureless blips?

Still, I think this is the most viable and attractive formulation of panpsychism/panexperientialism I’ve seen.

Phi

I was wondering recently what we could do with all the new computing power which is becoming available. One answer might be calculating phi, effectively a measure of consciousness, which was very kindly drawn to my attention by Christof Koch. Phi is actually a time- and state-dependent measure of integrated information developed by Giulio Tononi in support of the Integrated Information Theory (IIT) of consciousness which he and Koch have championed. Some readable expositions of the theory are here and here with the manifesto here and a formal paper presenting phi here. Koch says the theory is the most exciting conceptual development he’s seen in “the inchoate science of consciousness”, and I can certainly see why.

The basic premise of the theory is simply that consciousness is constituted by integrated information. It stems from the phenomenological observations that there are vast numbers of possible conscious states, and that each of them appears to unify or integrate a very large number of items of information. What really lifts the theory above the level of most others in this area is the detailed mathematical underpinning, which means phi is not a vague concept but a clear and possibly even a practically useful indicator.
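
Since the formal definition of phi is fairly heavy going, here is a deliberately crude toy version in Python (my own sketch, not Tononi’s actual measure, which is defined over a system’s cause-effect repertoires rather than a static distribution of states). It only illustrates the ‘minimum information partition’ idea: a system counts as integrated only if every way of cutting it in two destroys information shared across the cut.

```python
import itertools
import math
import random
from collections import Counter

def entropy(samples, idx):
    """Shannon entropy (bits) of the empirical joint distribution over the variables at positions idx."""
    counts = Counter(tuple(s[i] for i in idx) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def toy_phi(samples):
    """Minimum, over all bipartitions of the variables, of the mutual information
    between the two parts. High only if no way of cutting the system in two
    leaves the parts informationally independent. A toy stand-in, not Tononi's phi."""
    n_vars = len(samples[0])
    variables = range(n_vars)
    best = float("inf")
    for r in range(1, n_vars // 2 + 1):
        for part_a in itertools.combinations(variables, r):
            part_b = tuple(v for v in variables if v not in part_a)
            mi = (entropy(samples, part_a) + entropy(samples, part_b)
                  - entropy(samples, variables))
            best = min(best, mi)
    return best

# Three coupled binary variables (x, y, x XOR y) versus three independent coin flips.
random.seed(0)
coupled, independent = [], []
for _ in range(10000):
    x, y = random.randint(0, 1), random.randint(0, 1)
    coupled.append((x, y, x ^ y))
    independent.append(tuple(random.randint(0, 1) for _ in range(3)))

print(round(toy_phi(coupled), 2))      # about 1.0 bit: every cut severs shared information
print(round(toy_phi(independent), 2))  # about 0.0 bits: nothing is integrated across any cut
```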

One implication of the theory is that consciousness lies on a continuum: rather than being an on-or-off matter, it comes in degrees. The idea that lower levels of consciousness may occur when we are half-awake, or in dogs or other animals, is plausible and appealing. Perhaps a little less intuitive is the implication that there must in theory be higher states of consciousness than any existing human being could ever have attained. I don’t think this means states of greater intelligence or enlightenment, necessarily; it’s more a matter of being more awake than awake, an idea which (naturally enough, I suppose) is difficult to get one’s head around, but has a tantalising appeal.

Equally, the theory implies that some minimal level of consciousness goes a long way down to systems with only a small quantity of integrated information. As Koch points out, this looks like a variety of panpsychism or panexperientialism, though I think the most natural interpretation is that real consciousness probably does not extend all that far beyond observably animate entities.

One congenial aspect of the theory for me is that it puts causal relations at the centre of things: while a system with complex causal interactions may generate a high value of phi, a ‘replay’ of its surface dynamics would not. This seems to capture in a clearer form the hand-waving intuitive point I was making recently in discussion of Mark Muhlestein’s ideas.  Unfortunately calculation of Phi for the human brain remains beyond reach at the moment due to the unmanageable levels of complexity involved;  this is disappointing, but in a way it’s only what you would expect. Nevertheless, there is, unusually in this field, some hope of empirical corroboration.

I think I’m convinced that phi measures something interesting and highly relevant to consciousness; perhaps it remains to be finally established that what it measures is consciousness itself, rather than some closely associated phenomenon, some necessary but not sufficient condition. Your view about this, pending further evidence, may be determined by how far you think phenomenal experience can be identified with information. Is consciousness in the end what information – integrated information – just feels like from the inside? Could this be the final answer to the insoluble question of qualia? The idea doesn’t strike me with the ‘aha!’ feeling of the blinding insight, but (and this is pretty good going in this field) it doesn’t seem obviously wrong either.  It seems the right kind of answer, the kind that could be correct.

Could it?

Dancing Pixies

I see that among the first papers published by the recently-launched journal Cognitive Computation, the editors have sportingly included one arguing that we shouldn’t be seeing cognition as computational at all. The paper, by John Mark Bishop of Goldsmith’s, reviews some of the anti-computational arguments and suggests we should think of cognitive processes in terms of communication and interaction instead.

The first two objections to computation are in essence those of Penrose and Searle, and both have been pretty thoroughly chewed over in previous discussions in many places; the first suggests that human cognition does not suffer the Gödelian limitations under which formal systems must labour, and so the brain cannot be operating under a formal system like a computer program; the second is the famous Chinese Room thought experiment. Neither objection is universally accepted, to put it mildly,  and I’m not sure that Bishop is saying he accepts them unreservedly himself – he seems to feel that having these popular counter-arguments in play is enough of a hit against computationalism in itself to make us want to look elsewhere.

The third case against computationalism is the pixies: I believe this is an argument of Bishop’s own, dating back a few years, though he scrupulously credits some of the essential ideas to Putnam and others. A remarkable feature of the argument is that it uses panpsychism in a reductio ad absurdum (reductio ad absurdum is where you assume the truth of the thing you’re arguing against, and then show that it leads to an absurd, preferably self-contradictory conclusion).

Very briefly, it goes something like this: if computationalism is true, then anything with the right computational properties has true consciousness (Bishop specifies Ned Block’s p-consciousness, phenomenal consciousness, real something-that-it-is-like experience). But a computation is just a given series of states, and those states can be indicated any way we choose. It follows that on some interpretation, the required kind of series of states is to be found all over the place all the time. If that were true, consciousness would be ubiquitous, and panpsychism would be true (a state of affairs which Bishop represents as being akin to a world full of pixies dancing everywhere). But since, says Bishop, we know that panpsychism is just ridiculous, that must be wrong, and it follows that our initial assumption was incorrect: computationalism is false after all.
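
Just to make the pivotal move vivid, here is a toy illustration in Python of the kind of arbitrary state-mapping the argument trades on (my own example, not Bishop’s or Putnam’s formal construction): any sequence of distinct physical states can simply be relabelled so that it ‘implements’ the run of a simple machine.

```python
# A Putnam-style mapping, sketched for illustration only. Take any sequence of
# distinct 'physical' states (here, hourly temperature readings of a rock) and
# relabel them so that they trace out the state sequence of a simple computation.

# The target computation: a 2-bit counter stepping through its states.
counter_run = ["00", "01", "10", "11"]

# Arbitrary physical states, one per clock tick (any distinct values would do).
rock_readings = [17.2, 17.9, 18.4, 19.1]

# The 'interpretation' is just a lookup table pairing physical states
# with computational ones.
interpretation = dict(zip(rock_readings, counter_run))

# Under this mapping, the rock's afternoon 'implements' the counter's run.
replayed = [interpretation[r] for r in rock_readings]
assert replayed == counter_run
print(replayed)  # ['00', '01', '10', '11']
```

The obvious question is whether a mapping as cheap as that lookup table is really enough to count as implementing a computation; that is just where the objections discussed below try to drive in a wedge.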

There are of course plenty of people who would not accept this at all, and would instead see the whole argument as just another reason to think that panpsychism might be true after all. Bishop does not spend much time on explaining why he thinks panpsychism is unacceptable, beyond suggesting that it is incompatible with the ‘widespread’ desire to explain everything in physical terms, but he does take on some other objections more explicitly.  These mostly express different kinds of uneasiness about the idea that an arbitrary selection of things could properly constitute a computation with the right properties to generate consciousness.

One of the more difficult is an objection from Hofstadter that the required sequences of states can only be established after the fact: perhaps we could copy down the states of a conscious experience and then reproduce them, but not determine them in advance. Bishop uses an argument based on running the same consciousness program on a robot twice; the first time we didn’t know how it would turn out; the second time we did (because it’s an identical robot and identical program) but it’s absurd to think that one run could be conscious and the other not. 

Perhaps the most tricky objection mentioned is from Chalmers; it points out that cognitive processes are not pre-ordained linear sequences of states, but at every stage have the possibility of branching off and developing differently. We could, of course, remove every conditional switch in a given sequence of conscious cognition and replace it by a non-conditional one leading on to the state which was in fact the next one chosen. For that given sequence, the outputs are the same – but we’re not entitled to presume that conscious experience would arise in the same way because the functional organisation is clearly different, and that is the thing, on computationalist reasoning, which needs to be the same. Bishop therefore imagines a more refined version: two robots run similar programs; one program has been put through a code optimiser which keeps all the conditional branches but removes bits of code which follow, as it were, the unused branches of the conditionals. Now surely everything relevant is the same: are we going to say that consciousness arises in one robot by virtue of there being bits of extra code which lie idle? That seems odd.
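
A minimal sketch may help to fix the idea (my own illustration, not code from Bishop’s paper): both versions keep every conditional test, but in the second the body of the branch that this particular run never takes has been stubbed out.

```python
# Two versions of the same update step. For a run in which 'state' never goes
# negative, they pass through exactly the same sequence of states.

def step_full(state):
    if state >= 0:            # branch actually taken on this run
        return state + 1
    else:                     # branch present but never exercised
        return -state

def step_pruned(state):
    if state >= 0:            # same conditional structure as above
        return state + 1
    else:                     # the unused branch has been stubbed out
        raise RuntimeError("branch never reached on this run")

trace_full, trace_pruned = [0], [0]
for _ in range(5):
    trace_full.append(step_full(trace_full[-1]))
    trace_pruned.append(step_pruned(trace_pruned[-1]))

assert trace_full == trace_pruned
print(trace_full)  # [0, 1, 2, 3, 4, 5]
```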

That argument might work, but we must remember that Bishop’s reductio requires the basics of consciousness to be lying around all over the place, instantiated by chance in all sorts of things. While we were dealing with mere sequences of states, that might look plausible, but if we have to have conditional branches connecting the states (even ones whose unused ends have been pruned) it no longer seems plausible to me.  So in patching up his case to respond to the objection, Bishop seems to me to have pulled out some of the foundations it was originally standing on. In fact, I think that consciousness requires the right kind of causal relations between mental states, so that arbitrary sets or lists of states won’t do.

The next part of the discussion is interesting. In many ways computationalism looks like a productive strategy, concedes Bishop – but there are reasons to think it has its limitations. One of the arguments he quotes here is the Searlian point that there is a difference between a simulation and reality. If we simulate a rainstorm on a computer, no-one expects to get wet; so if we simulate the brain, why should we expect consciousness? Now the distinction between a simulation and the real thing is a relevant and useful one, but the comparison of rain and consciousness begs the question too much to serve as an argument. By choosing rain as the item to be simulated, we pick something whose physical composition is (in some sense) essential; if it isn’t made of water it isn’t rain. To assume that the physical substrate is equally essential for consciousness is just to assume what computationalism explicitly denies. Take a different example: a letter. When I write a letter on my PC, I don’t regard it as a mere simulation, even though no paper need be involved until it’s printed; in fact, I have more than once written letters which were eventually sent as email attachments and never achieved physical form. This is surely because with a letter, the information is more essential than the physical instantiation. Doesn’t it seem highly plausible that the same might be true to an even greater extent of consciousness? If it is true, then the distinction between simulation and reality ceases to apply.

To make sceptical simulation arguments work, we need a separate reason to think that some computation was more like a simulation than the reality – and the strange thing is, I think that’s more or less what the objections from Hofstadter and Chalmers were giving us; they both sort of draw on the intuition that a sequence of states could only simulate consciousness  in the sort of way a series of film frames simulates motion.

The ultimate point, for Bishop, is to suggest we should move on from the ‘metaphor’ of computation to another based on communication. It’s true that the idea of computation as the basis of consciousness has run into problems over recent years, and the optimism of its adherents has been qualified by experience; on the other hand it still has some remarkable strengths. For one thing, we understand computation pretty clearly and thoroughly;  if we could reduce consciousness to computation, the job really would be done; whereas if we reduce consciousness to some notion of communication which still (as Bishop says) requires development and clarification, we may still have most of the explanatory job to do.

The other thing is that computation of some kind, if not the only game in town, still is far closer to offering a complete answer than any other hypothesis. I suspect many people who started out in opposing camps on this issue would agree now that the story of consciousness is more likely to be ‘computation plus plus’ (whatever that implies) than something quite unrelated.