Posts tagged ‘IIT’

Are we losing it?

Nick Bostrom’s suggestion that we’re most likely living in a simulated world continues to provoke discussion.  Joelle Dahm draws an interesting parallel with multiverses. I think myself that it depends a bit on what kind of multiverse you’re going for – the ones that come from an interpretation of quantum physics usually require conservation of identity between universes – you have to exist in more than one universe – which I think is both potentially problematic and strictly speaking non-Bostromic. Dahm also briefly touches on some tricky difficulties about how we could tell whether we were simulated or not, which seem reminiscent of Descartes’ doubts about how he could be sure he wasn’t being systematically deceived by a demon – hold that thought for now.

Some of the assumptions mentioned by Dahm would probably annoy Sabine Hossenfelder, who lays into the Bostromisers with a piece about just how difficult simulating the physics of our world would actually be: a splendid combination of indignation with actually knowing what she’s talking about.

Bostrom assumes that if advanced civilisations typically have a long lifespan, most will get around to creating simulated versions of their own civilisation, perhaps re-enactments of earlier historical eras. Since each simulated world will contain a vast number of people, the odds are that any randomly selected person is in fact living in a simulated world. The probability becomes overwhelming if we assume that the simulations are good enough for the simulated people to create simulations within their own world, and so on.
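The arithmetic behind that ‘odds are’ step is simple enough to sketch; the numbers below are purely illustrative, not Bostrom’s own:

```python
def simulated_fraction(real_pop, n_sims, sim_pop):
    """Fraction of all observers who are simulated, given one real
    population and n_sims simulated populations of sim_pop each."""
    simulated = n_sims * sim_pop
    return simulated / (real_pop + simulated)

# One real civilisation of ten billion people running a thousand
# ancestor simulations of the same size:
print(simulated_fraction(1e10, 1000, 1e10))  # ≈ 0.999
```

On those (made-up) figures a randomly chosen observer is almost certainly simulated, which is all the argument needs.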

There’s plenty of scope for argument about whether consciousness can be simulated computationally at all, whether worlds can be simulated in the required detail, and certainly about the optimistic idea of nested simulations. But recently I find myself thinking, isn’t it simpler than that? Are we simulated people in a simulated world? No, because we’re real, and people in a simulation aren’t real.

When I say that, people look at me as if I were stupid, or at least, impossibly naive. Dude, read some philosophy, they seem to say. Dontcha know that Socrates said we are all just grains of sand blowing in the wind?

But I persist – nothing in a simulation actually exists (clue’s in the name), so it follows that if we exist, we are not in a simulation. Surely no-one doubts their own existence (remember that parallel with Descartes), or if they do, only on the kind of philosophical level where you can doubt the existence of anything? If you don’t even exist, why do I even have to address your simulated arguments?

I do, though. Actually, non-existent people can have rather good arguments; dialogues between imaginary people are a long-established philosophical method (in my feckless youth I may even have indulged in the practice myself).

But I’m not entirely sure what the argument against reality is. People do quite often set out a vision of the world as powered by maths; somewhere down there the fundamental equations are working away and the world is what they’re calculating. But surely that is the wrong way round; the equations describe reality, they don’t dictate it. A system of metaphysics that assumes the laws of nature really are explicit laws set out somewhere looks tricky to me; and worse, it can never account for the arbitrary particularity of the actual world. We sort of cling to the hope that this weird specificity can eventually be reduced away by titanic backward extrapolation to a hypothetical time when the cosmos was reduced to the simplicity of a single point, or something like it; but we can’t make that story work without arbitrary constants and the result doesn’t seem like the right kind of explanation anyway. We might appeal instead to the idea that the arbitrariness of our world arises from its being an arbitrary selection out of the incalculable banquet of the multiverse, but that doesn’t really explain it.

I reckon that reality just is the thing that gets left out of the data and the theory; but we’re now so used to the supremacy of those two we find it genuinely hard to remember, and it seems to us that a simulation with enough data is automatically indistinguishable from real events – as though once your 3D printer was programmed, there was really nothing to be achieved by running it.

There’s one curious reference in Dahm’s piece which makes me wonder whether Christof Koch agrees with me. She says the Integrated Information Theory doesn’t allow for computer consciousness. I’d have thought it would; but the remarks from Koch she quotes seem to be about how you need not just the numbers about gravity but actual gravity too, which sounds like my sort of point.

Regular readers may already have noticed that I think this neglect of reality also explains the notorious problem of qualia; they’re just the reality of experience. When Mary sees red, she sees something real, which of course was never included in her perfect theoretical understanding.

I may be naive, but you can’t say I’m not consistent…

You can’t build experience out of mere information. Not, at any rate, the way the Integrated Information Theory (IIT) seeks to do it. So says Garrett Mindt in a forthcoming paper for the JCS.

‘Information’ is notoriously a slippery term, and much depends on how you’re using it. Commonly people distinguish the everyday meaning, which makes information a matter of meaning or semantics, and the sense defined by Shannon, which is statistical and excludes meaning, but is rigorous and tractable.
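Shannon’s measure, for reference, cares only about the probabilities of the symbols, not about what, if anything, they mean:

```python
from math import log2

def shannon_entropy(probs):
    """H = -sum(p * log2(p)): average information per symbol, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A fair coin carries one bit per toss; a heavily biased coin far less,
# regardless of what the outcomes 'mean':
print(shannon_entropy([0.5, 0.5]))    # 1.0
print(shannon_entropy([0.99, 0.01]))  # ≈ 0.08
```

Rigorous and tractable, as advertised; but nothing in it distinguishes a weather report from random noise with the same statistics.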

It is a fairly common sceptical claim that you cannot get consciousness, or anything like intentionality or meaning, out of Shannon-style information. Mindt describes in his paper a couple of views that attack IIT on similar grounds. One is by Cerullo, who says:

‘Only by including syntactic, and most importantly semantic, concepts can a theory of information hope to model the causal properties of the brain…’

The other is by Searle, who argues that information, correctly understood, is observer dependent. The fact that this post, for example, contains information depends on conscious entities interpreting it as such, or it would be mere digital noise. Since information, defined this way, requires consciousness, any attempt to derive consciousness from it must be circular.

Although Mindt is ultimately rather sympathetic to both these cases, he says they fail because they assume that IIT is working with a Shannonian conception of information: but that’s not right. In fact IIT invokes a distinct causal conception of information as being ‘a difference that makes a difference’. A system conveys information, in this sense, if it can induce a change in the state of another system. Mindt likens this to the concept of information introduced by Bateson.

Mindt makes the interesting point that Searle and others tend to carve the problem up by separating syntax from semantics; but it’s not clear that semantics is required for hard-problem style conscious experience (in fact I think the question of what, if any, connection there is between the two is puzzling and potentially quite interesting). Better to use the distinction favoured by Tononi in the context of IIT, between extrinsic information – which covers both syntax and semantics – and intrinsic, which covers structure, dynamics, and phenomenal aspects.

Still, Mindt finds IIT vulnerable to a slightly different attack. Even with the clarifications he has made, the theory remains one of structure and dynamics, and physicalist structure and dynamics just don’t look like the sort of thing that could ever account for the phenomenal qualities of experience. There is no theoretical bridge arising from IIT that could take us across the explanatory gap.

I think the case is well made, although unfortunately it may be a case for despair. If this objection stands for IIT then it most likely stands for all physicalist theories. This is a little depressing because from one point of view, non-physicalist theories look unattractive. From that perspective, coming up with a physical explanation of phenomenal experience is exactly the point of the whole enquiry; if no such explanation is possible, no decent answer can ever be given.

It might still be the case that IIT is the best theory of its kind, and that it is capable of explaining many aspects of consciousness. We might even hope to squeeze the essential Hard Problem to one side. What if IIT could never explain why the integration of information gives rise to experience, but could explain everything, or most things, about the character of experience? Might we not then come to regard the Hard Problem as one of those knotty tangles that philosophers can mull over indefinitely, while the rest of us put together a perfectly good practical understanding of how mind and brain work?

I don’t know what Mindt would think about that, but he rounds out his case by addressing one claimed prediction of IIT; namely that if a large information complex is split, the attendant consciousness will also divide. This looks like what we might see in split-brain cases, although so far as I can see, nobody knows whether split-brain patients have two separate sets of phenomenal experiences, and I’m not sure there’s any way of testing the matter. Mindt points out that the prediction is really a matter of ‘Easy Problem’ issues and doesn’t help otherwise: it’s also not an especially impressive prediction, as many other possible theories would predict the same thing.

Mindt’s prescription is that we should go back and have another try at that definition of information; without attempting that himself, he smiles on dual aspect theories. I’m afraid I am left scowling at all of them; as always in this field the arguments against any idea seem so much better than the ones for.


Is consciousness a matter of entropy in the brain? An intriguing paper by R. Guevara Erra, D. M. Mateos, R. Wennberg, and J.L. Perez Velazquez says

normal wakeful states are characterised by the greatest number of possible configurations of interactions between brain networks, representing highest entropy values.

What the researchers did, broadly, is identify networks in the brain that were operative at a given time, and then work out the number of possible configurations these networks were capable of. In general, conscious states were associated with states with high numbers of possible configurations – high levels of entropy.
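If I’ve understood the method, the entropy in question is roughly the logarithm of the number of ways the observed set of synchronised pairs could be drawn from all the possible pairings; my own hedged reconstruction, not the authors’ code:

```python
from math import comb, log

def config_entropy(n_possible_pairs, n_connected):
    """Log of the number of ways n_connected synchronised pairs
    could be chosen from n_possible_pairs candidate pairings."""
    return log(comb(n_possible_pairs, n_connected))

# The number of configurations, and hence the entropy, peaks when
# roughly half of the possible pairs are connected:
for k in (1, 5, 9):
    print(k, round(config_entropy(10, k), 3))
```

Note that on this measure the maximum falls in the middle: neither no connections nor all connections, but about half, gives the most possible configurations.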

That makes me wrinkle my forehead a bit because it doesn’t fit well with my layman’s grasp of the concept of entropy. In my mind entropy is associated with low levels of available energy and an absence of large complex structure. Entropy always increases, but can decrease locally, as in the case of the complex structures of life, by paying for the decrease with a bigger increase elsewhere; typically by using up available energy. On this view, conscious states – and high levels of possible configurations – look like they ought to be low entropy; but evidently the reverse is actually the case. The researchers also used the Lempel-Ziv measure of complexity, one with strong links to information content, which is clearly an interesting angle in itself.
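For what it’s worth, the Lempel-Ziv idea is easy to illustrate: you count how many new ‘phrases’ it takes to parse a sequence, so highly regular signals score low and varied ones score high. A minimal sketch of a simplified LZ78-style parse (not the exact variant the paper uses):

```python
def lz_complexity(sequence):
    """Count distinct phrases in a simple LZ78-style parse: extend
    the current phrase until it is one we haven't seen before,
    record it, and start a new phrase."""
    seen = set()
    phrase = ""
    for symbol in sequence:
        phrase += symbol
        if phrase not in seen:
            seen.add(phrase)
            phrase = ""
    return len(seen)

print(lz_complexity("0000000000000000"))  # highly regular: low score
print(lz_complexity("0123456789abcdef"))  # nothing repeats: high score
```

The link to information content is that a sequence needing few phrases is highly compressible, and so carries little information per symbol.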

Of the nine subjects, three were epileptic, which allowed comparisons to be made with seizure states as well as with waking and sleeping states. Interestingly, REM sleep showed relatively high entropy levels, which intuitively squares with the idea that dreaming resembles waking a little more than fully unconscious states do – though I think the equation of REM sleep with dreaming is not now thought to be as perfect as it once seemed.

One acknowledged weakness in the research is that it was not possible to establish actual connection. The assumed networks were therefore based on synchronisation instead. However, synchronisation can arise without direct connection, and absence of synchronisation is not necessarily proof of the absence of connection.

Still, overall the results look good and the picture painted is intuitively plausible. Putting all talk of entropy and Lempel-Ziv aside, what we’re really saying is that conscious states fall in the middle of a notional spectrum: at one end of this spectrum is chaos, with neurons firing randomly; at the other we have them all firing simultaneously in indissoluble lockstep.

There is an obvious resemblance here to the Integrated Information Theory (IIT), which holds that consciousness arises once the quantity of integrated information, measured by a value known as Phi, is high enough. In fact, the authors of the current paper situate it explicitly within the context of earlier work which suggests that the general principle of natural phenomena is the maximisation of information transfer. The read-across from the new results into terms of information processing is quite clear. The authors do acknowledge IIT, but just barely; they may be understandably worried that their new work could end up interpreted as mere corroboration for IIT.

My main worry about both is that they are very likely true, but may not be particularly enlightening. As a rough analogy, we might discover that the running of an internal combustion engine correlates strongly with raised internal temperature states. These raised temperatures prove to be a pretty good practical guide to whether the engine is running, and we’re tempted to conclude that raised temperature is the same as running. Actually, though, raising the temperature artificially does not make the engine run, and there is in fact a complex story about running in which raised temperatures are not really central. So it might be that high entropy is characteristic of conscious states without that telling us anything useful about how those states really work.

But I evidently don’t really get entropy, so I might easily be missing the true significance of all this.

‘…stupid as a doorknob…’ Just part of Luboš Motl’s vigorous attack on Scott Aaronson’s critique of IIT, the Integrated Information Theory of Giulio Tononi.

To begin at the beginning. IIT says that consciousness arises from integrated information, and proposes a mathematical approach to quantifying the level of integrated information in a system, a quantity it names Phi (actually there are several variant ways to define Phi that differ in various details, which is perhaps unfortunate). Aaronson and Motl both describe this idea as a worthy effort but both have various reservations about it – though Aaronson thinks the problems are fatal while Motl thinks IIT offers a promising direction for further work.
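For readers who haven’t met Phi: the rough idea is to ask how much information the whole system carries over and above its parts, minimised over the ways of cutting the system in two. Here is a deliberately crude toy of mine – mutual information across the weakest bipartition of a tiny three-unit system – which is none of the official Phi definitions, just the flavour:

```python
from itertools import product
from math import log2

# Joint distribution over three binary units: units 0 and 1 are
# perfectly correlated, unit 2 is independent of both.
joint = {}
for a, c in product([0, 1], repeat=2):
    joint[(a, a, c)] = 0.25

def marginal(p, idx):
    """Marginal distribution over the units listed in idx."""
    m = {}
    for state, pr in p.items():
        key = tuple(state[i] for i in idx)
        m[key] = m.get(key, 0.0) + pr
    return m

def mutual_info(p, part):
    """I(part ; rest) for one bipartition of the three units."""
    rest = tuple(i for i in range(3) if i not in part)
    pa, pb = marginal(p, part), marginal(p, rest)
    total = 0.0
    for state, pr in p.items():
        ka = tuple(state[i] for i in part)
        kb = tuple(state[i] for i in rest)
        total += pr * log2(pr / (pa[ka] * pb[kb]))
    return total

# Toy 'Phi': information carried across the weakest cut.
phi = min(mutual_info(joint, part) for part in [(0,), (1,), (2,)])
print(phi)
```

The cut that severs the independent unit carries no information, so the toy score is zero: integration, on this picture, means there is no cheap way to cut the system apart.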

Both pieces contain a lot of interesting side discussion, including Aaronson’s speculation that approximating Phi for a real brain might be an NP-hard problem. This is the digression that prompted the doorknob comment: so what if it were NP-hard, demands Motl; you think nature is barred from containing NP-hard problems?

The real crux as I understand it is Aaronson’s argument that we can give examples of systems with high scores for Phi that we know intuitively could not be conscious. Eric Schwitzgebel has given a somewhat similar argument but cast in more approachable terms; Aaronson uses a Vandermonde matrix for his example of a high-phi but intuitively non-conscious entity, whereas Schwitzgebel uses the United States.

Motl takes exception to Aaronson’s use of intuition here. How does he know that his matrix lacks consciousness? If Aaronson’s intuition is the test, what’s the point of having a theory? The whole point of a theory is to improve on and correct our intuitive judgements, isn’t it? If we’re going to fall back on our intuitions, argument is pointless.

I think appeals to intuition are rare in physics, where it is probably natural to regard them as illegitimate, but they’re not that unusual in philosophy, especially in ethics. You could argue that G.E. Moore’s approach was essentially to give up on ethical theory and rely on intuition instead. Often intuition limits what we regard as acceptable theorising, but our theories can also ‘tutor’ and change our intuitions. My impression is that real world beliefs about death, for example, have changed substantially in recent decades under the influence of utilitarian reasoning; we’re now much less likely to think that death is simply forbidden and more likely to accept calculations about the value of lives. We still, however, rule out as unintuitive (‘just obviously wrong’) such utilitarian conclusions as the propriety of sometimes punishing the innocent.

There’s an interesting question as to whether there actually is, in itself, such a thing as intuition. Myself I’d suggest the word covers any appealing pre-rational thought; we use it in several ways. One is indeed to test our conclusions where no other means is available; it’s interesting that Motl actually remarks that the absence of a reliable objective test of consciousness is one of IIT’s problems; he obviously does not accept that intuition could be a fall-back, so he is presumably left with the gap (which must surely affect all theories of consciousness). Philosophers also use an appeal to intuition to help cut to the chase, by implicitly invoking shared axioms and assumptions; and often enough ‘thought experiments’ which are not really experiments at all but in the Dennettian phrase ‘intuition pumps’ are used for persuasive effect; they’re not proofs but they may help to get people to agree.

Now as a matter of fact I think in Aaronson’s case we can actually supply a partial argument to replace pure intuition. In this discussion we are mainly talking about subjective consciousness, the ‘something it is like’ to experience things. But I think many people would argue that Hard Problem consciousness requires the Easy Problem kind to be in place first as a basis. Subjective experience, we might argue, requires the less mysterious apparatus of normal sensory or cognitive experience; and Aaronson (or Schwitzgebel) could argue that their example structures definitely don’t have the sort of structure needed for that, a conclusion we can reach through functional argument without the need for intuition.

Not everybody would agree, though; some, especially those who lean towards panpsychism and related theories of ‘consciousness everywhere’ might see nothing wrong with the idea of subjective consciousness without the ‘mechanical’ kind. The standard philosophical zombie has Easy Problem consciousness without qualia; these people would accept an inverted zombie who has qualia with no brain function. It seems a bit odd to me to pair such a view with IIT (if you don’t think functional properties are required I’d have thought you would think that integrating information was also dispensable) but there’s nothing strictly illogical about it. Perhaps the dispute over intuition really masks a different disagreement, over the plausibility of such inverted zombies, obviously impossible in Aaronson’s eyes, but potentially viable in Motl’s?

Motl goes on to offer what I think is a rather good objection to IIT as it stands; ie that it seems to award consciousness to ‘frozen’ or static structures if they have a high enough Phi score. He thinks it’s necessary to reformulate the idea to capture the point that consciousness is a process. I agree – but how does Motl know consciousness requires a process? Could it be that it’s just…  intuitively obvious?

…for two theories?

Ihtio kindly drew my attention to an interesting paper which sets integrated information theory (IIT) against its own preferred set of ideas – semantic pointer competition (SPC). I’m not quite sure where this ‘one on one’ approach to theoretical discussion comes from. Perhaps the authors see IIT as gaining ground to the extent that any other theory must now take it on directly. The effect is rather of a single bout from some giant knock-out tournament of theories of consciousness (I would totally go for that, incidentally; set it up, somebody!).

We sort of know about IIT by now, but what is SPC? The authors of the paper, Paul Thagard and Terrence C Stewart, suggest that:

consciousness is a neural process resulting from three mechanisms: representation by firing patterns in neural populations, binding of representations into more complex representations called semantic pointers, and competition among semantic pointers to capture the most important aspects of an organism’s current state.

I like the sound of this, and from the start it looks like a contender. My main problem with IIT is that, as was suggested last time, it seems easy enough to imagine that a whole lot of information could be integrated but remain unilluminated by consciousness; it feels as if there needs to be some other functional element; but if we supply that element it looks as if it will end up doing most of the interesting work and relegate the integration process to something secondary or even less important. SPC looks to be foregrounding the kind of process we really need.

The authors provide three basic hypotheses on which SPC rests:

H1. Consciousness is a brain process resulting from neural mechanisms.
H2. The crucial mechanisms for consciousness are: representation by patterns of firing in neural populations, binding of these representations into semantic pointers, and competition among semantic pointers.
H3. Qualitative experiences result from the competition won by semantic pointers that unpack into neural representations of sensory, motor, emotional, and verbal activity.

The particular mention of the brain in H1 is no accident. The authors stress that they are offering a theory of how brains work. Perhaps one day we’ll find aliens or robots who manage some form of consciousness without needing brains, but for now we’re just doing the stuff we know about. “…a theory of consciousness should not be expected to apply to all possible conscious entities.”

Well, actually, I’d sort of like it to – otherwise it raises questions about whether it really is consciousness itself we’re explaining. The real point here, I think, is meant to be a criticism of IIT, namely that it is so entirely substrate-neutral that it happily assigns consciousness to anything that is sufficiently filled with integrated information. Thagard and Stewart want to distance themselves from that, claiming it as a merit of their theory that it only offers consciousness to brains. I sympathise with that to a degree, but if it were me I’d take a slightly different line, resting on the actual functional features they describe rather than simple braininess. The substrate does have to be capable of doing certain things, but there’s no need to assume that only neurons could conceivably do them.

The idea of binding representations into ‘semantic pointers’ is intriguing and seems like the right kind of way to be going; what bothers me most here is how we get the representations in the first place. Not much attention is given to this in the current paper: Thagard and Stewart say neurons that interact with the world and with each other become “tuned” to regularities in the environment. That’s OK, but not really enough. It can’t be that mere interaction is enough, or everything would be a prolific representation of everything around it; but picking out the right “regularities” is a non-trivial task, arguably the real essence of representation.
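The binding operation itself is at least concrete: in Eliasmith’s semantic pointer architecture, which Thagard and Stewart build on, vectors are bound by circular convolution and approximately unbound again by convolving with an approximate inverse. A minimal sketch (my own, not the authors’ code, and a far cry from actual neurons):

```python
import numpy as np

def bind(a, b):
    """Bind two vectors by circular convolution (computed via FFT)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(bound, a):
    """Approximately recover b from bind(a, b) by binding with the
    'involution' of a, an approximate inverse under convolution."""
    a_inv = np.concatenate(([a[0]], a[1:][::-1]))
    return bind(bound, a_inv)

rng = np.random.default_rng(0)
d = 1024
a, b, c = rng.normal(0, 1 / np.sqrt(d), (3, d))  # random unit-ish vectors

recovered = unbind(bind(a, b), a)
# `recovered` is noisy, but much more similar to b than to the
# unrelated vector c:
print(recovered @ b, recovered @ c)
```

The appealing feature is that the bound vector has the same dimension as its parts, so pointers can be bound into further pointers indefinitely, at the cost of accumulating noise.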

Competition is the way particular pointers get selected to enter consciousness, according to H2; I’m not exactly sure how that works and I have doubts about whether open competition will do the job. One remarkable thing about consciousness is its coherence and direction, and unregulated competition seems unlikely to produce that, any more than a crowd of people struggling for access to a microphone would produce a fluent monologue. We can imagine that a requirement for coherence is built in, but the mechanism that judges coherence turns out to be rather important and rather difficult to explain.

So does SPC deliver? H3 claims that it gives rise to qualitative experience. The paper splits the issue into two questions: first, why are there all these different experiences, and second, why is there any experience at all? On the first, the answers are fairly good, but not particularly novel or surprising; a diverse range of sensory inputs and patterns of neural firing naturally give rise to a diversity of experience. On the second question, the real Hard Problem, we don’t really get anywhere; it’s suggested that actual experience is an emergent property of the three processes of consciousness. Maybe it is, but that doesn’t really explain it. I can’t seriously criticise Thagard and Stewart because no-one has really done any better with this; but I don’t see that SPC has a particular edge over IIT in this respect either.

Not that their claim to superiority rests on qualia; in fact they bring a range of arguments to suggest that SPC is better at explaining various normal features of consciousness. These vary in strength, in my opinion. First feature up is how consciousness starts and stops. SPC has a good account, but I think IIT could do a reasonable job, too. The second feature is how consciousness shifts, and this seems a far stronger case; pointers naturally lend themselves better to this than the gradual shifts you would at first sight expect from a mass of integrated information. Next we have a claim that SPC is better at explaining the different kinds or grades of consciousness that different organisms presumably have. I suppose the natural assumption, given IIT, would be that you either have enough integration for consciousness or you don’t. Finally, it’s claimed that SPC is the winner when it comes to explaining the curious unity/disunity of consciousness. Clearly SPC has some built-in tools for binding, and the authors suggest that competition provides a natural source of fragmentation. They contrast this with Tononi’s concept of quantity of consciousness, an idea they disparage as meaningless in the face of the mental diversity of the organisms in the world.

As I say, I find some of these points stronger than others, but on the whole I think the broad claim that SPC gives a better picture is well founded. To me it seems the advantages of SPC mainly flow from putting representation and pointers at the centre. The dynamic quality this provides, and the spark of intentionality, make it better equipped to explain mental functions than the more austere apparatus of IIT. To me SPC is like a vehicle that needs overhauling and some additional components (some of those not readily available); it doesn’t run just now but you can sort of see how it would. IIT is more like an elegant sculptural form which doesn’t seem to have a place for the wheels.

Christof Koch declares himself a panpsychist in this interesting piece, but I don’t think he really is one. He subscribes to the Integrated Information Theory (IIT) of Giulio Tononi, which holds that consciousness is created by the appropriate integration of sufficient quantities of information. The level of integrated information can be mathematically expressed in a value called Phi: we have discussed this before a couple of times. I think this makes Koch an emergentist, but curiously enough he vigorously denies that.

Koch starts with a quotation about every outside having an inside which aptly brings out the importance of the first-person perspective in all these issues. It’s an implicit theme of what Koch says (in my reading at least) that consciousness is something extra. If we look at the issue from a purely third-person point of view, there doesn’t seem to be much to get excited about. Organisms exhibit different levels of complexity in their behaviour and it turns out that this complexity of behaviour arises from a greater complexity in the brain. You don’t say! The astonishment meter is still indicating zero. It’s only when we add in the belief that at some stage the inward light of consciousness, actual phenomenal experience, has come on that it gets interesting. It may be that Koch wants to incorporate panpsychism into his outlook to help provide that ineffable light, but attempting to make two theories work together is a risky path to take. I don’t want to accuse anyone of leaning towards dualism (which is the worst kind of philosophical bitchiness) but… well, enough said. I think Koch would do better to stick with the austere simplicity of IIT and say: that magic light you think you see is just integrated information. It may look a bit funny but that’s all it is, get used to it.

He starts off by arguing persuasively that consciousness is not the unique prerogative of human beings. Some, he says, have suggested that language is the dividing line, but surely some animals, preverbal infants and so on should not be denied consciousness? Well, no, but language might be interesting, not for itself but because it is an auxiliary effect of a fundamental change in brain organisation, one that facilitates the handling of abstract concepts, say (or one that allows the integration of much larger quantities of information, why not?). It might almost be a side benefit, but also a handy sign that this underlying reorganisation is in place, which would not be to say that you couldn’t have the reorganisation without having actual language. We would then have something, human-style thought, which was significantly different from the feelings of dogs, although the impoverishment of our vocabulary makes us call them both consciousness.

Still, in general the view that we’re dealing with a spectrum of experience, one which may well extend down to the presumably dim adumbrations of worms and insects, seems only sensible.

One appealing way of staying monist but allowing for the light of phenomenal experience is through emergence: at a certain level we find that the whole becomes more than the sum of its parts: we do sort of get something extra, but in an unobjectionable way. Strangely, Koch will have no truck with this kind of thinking. He says

‘the mental is too radically different for it to arise gradually from the physical’.

At first sight this seemed to me almost a direct contradiction of what he had just finished saying. The spectrum of consciousness suggests that we start with the blazing 3D cinema projector of the human mind, work our way down to the magic lanterns of dogs, the candles of newts, and the faint tiny glows of worms – and then the complete darkness of rocks and air. That suggests that consciousness does indeed build up gradually out of nothing, doesn’t it? An actual panpsychist, moreover, pushes the whole thing further, so that trees have faint twinkles and even tiny pieces of clay have a detectable scintilla.

Koch’s view is not, in fact, contradictory: what he seems to want is something like one of those dimmer switches that has a definite on and off, but gradations of brightness when on. He’s entitled to take that view, but I don’t think I agree that gradual emergence of consciousness is unimaginable. Take the analogy of a novel. We can start with Pride and Prejudice, work our way down through short stories or incoherent first drafts, to recipe books or collections of limericks, books with scribble and broken sentences, down to books filled with meaningless lines, and the chance pattern of cracks on a wall. All the way along there will be debatable cases, and contrarians who disbelieve in the real existence of literature can argue against the whole thing (‘You need to exercise your imagination to make Pride and Prejudice a novel; but if you are willing to use your imagination I can tell you there are finer novels in the cracks on my wall than anything Jane bloody Austen ever wrote…’): but it seems clear enough to me that we can have a spectrum all the way down to nothing. That doesn’t prove that consciousness is like that, but makes it hard to assert that it couldn’t be.
The other reason it seems odd to hear such an argument from Koch is that he espouses the IIT, which seems to require just the sort of spectrum that sits well with emergentism. Presumably on Koch’s view a small amount of integrated information does nothing, but at some point, when there’s enough being integrated, we start to get consciousness? Yet he says:

“if there is nothing there in the first place, adding a little bit more won’t make something. If a small brain won’t be able to feel pain, why should a large brain be able to feel the god-awfulness of a throbbing toothache? Why should adding some neurons give rise to this ineffable feeling?”

Well, because a small brain only integrates a small amount of information, whereas a large one integrates enough for full consciousness? I think I must be missing something here, but look at this.

“[Consciousness] is a property of complex entities and cannot be further reduced to the action of more elementary properties. We have reached the ground floor of reductionism.”

Isn’t that emergence? Koch must see something else which he thinks is essential to emergentism which he doesn’t like, but I’m not seeing it.

The problem with Koch being panpsychist is that for panpsychists souls (or in this case consciousness) have to be everywhere. Even a particle of stone or a screwed-up sheet of wrapping paper must have just the basic spark; the lights must be at least slightly on. Koch doesn’t want to go quite that far – and I have every sympathy with that, but it means taking the pan out of the panpsychist. Koch fully recognises that he isn’t espousing traditional full-blooded panpsychism but in my opinion he deviates too far to be entitled to the badge. What Koch believes is that everything has the potential to instantiate consciousness when correctly organised and integrated. That amounts to no more than believing in the neutrality of the substrate, that neurons are not essential and that consciousness can be built with anything so long as its functional properties are right. All functionalists and a lot of other people (not everyone, of course) believe that without being panpsychists.

Perhaps functionalism is really the direction Koch’s theories lean towards. After all, it’s not enough to integrate information in any superficial way. A big database which exhaustively cross-referenced the Library of Congress would not seem much of a candidate for consciousness. Koch realises that there have to be some rules about what kinds of integration matter, but I think that if the theory develops far enough these other constraints will play an increasingly large role, until eventually we find that they have taken over the theory and the quantity of integrated information has receded to the status of a necessary but not sufficient condition.

I suppose that that might still leave room for Tononi’s Phi meter, now apparently built, to work satisfactorily. I hope it does, because it would be pretty useful.

Giulio Tononi’s Phi is an extraordinary book.  It’s heavy, and I mean that literally: presumably because of the high quality glossy paper, it is noticeably weighty in the hand; not one I’d want to hold up before my eyes for long without support, though perhaps my wrists have been weakened by habitual Kindle use.

That glossy paper is there for the vast number of sumptuous pictures with which the book is crammed; mainly great works of art, but also scientific scans and diagrams (towards the end a Pollock-style painting and a Golgi-Cox image of real neurons are amusingly juxtaposed: you really can barely tell which is which). What is going on with all this stuff?

My theory is that the book reflects a taste conditioned by internet use. The culture of the World Wide Web is quotational and referential: it favours links to good stuff and instant impact. In putting together a blog authors tend to gather striking bits and pieces rather the way a bower bird picks up brightly coloured objects to enhance its display ground, without worrying too much about coherence or context. (If we were pessimistic we might take this as a sign that our culture, like classical Greek culture before it, is moving away from the era of original thought into an age of encyclopedists; myself I’m not that gloomy –  I think that however frothy the Internet may get in places it’s all extra and mostly beneficial.) Anyway, that’s a bit what this book is like; a site decorated with tons of ‘good stuff’ trawled up from all over, and in that slightly uncomplimentary sense it’s very modern.

You may have guessed that I’m not sure I like this magpie approach.  The pictures are forced into a new context unrelated to what the original artist had in mind, one in which they jostle for space, and many are edited or changed, sometimes rather crudely (I know: I should talk, when it comes to crude editing of borrowed images – but there we are). The choice of image skews towards serious art (no cartoons here) and towards the erotic, scary, or grotesque. Poor Maddalena Svenuta gets tipped on her back, perhaps to emphasise the sexual overtones of the painting – although they are unignorable enough in the original orientation. This may seem to suggest a certain lack of respect for sources and certainly produces a rather indigestible result; but perhaps we ought to cut Tononi a bit of slack. The overflowing cornucopia of images seems to reflect his honest excitement and enthusiasm: he may, who knows, be pioneering a new form which we need time to get used to; and like an over-stuffed blog, the overwhelming gallimaufry is likely here and there to introduce any reader to new things worth knowing about. Besides the images the text itself is crammed with disguised quotes and allusions.  Prepare to be shocked: there is no index.

I’m late to the party here. Gary Williams has made some sharp-eyed observations on the implicit panpsychism of Tononi’s views;  Scott Bakker rather liked the book and the way some parts of Tononi’s theory chimed with his own Blind Brain theory (more on that another time, btw). Scott, however, raised a ‘quibble’ about sexism: I think he must have in mind this hair-raising sentence in the notes to Chapter 29:

At the end, Emily Dickinson saves the day with one of those pronouncements that show how poets (or women) have deeper intuition of what is real than scientists (or men) ever might: internal difference, where all the meanings are.

Ouch, indeed: but I don’t think this is meant to be Tononi speaking.

The book is arranged to resemble Dante’s Divine Comedy in a loose way: Galileo is presented as the main character, being led through dreamlike but enlightening encounters in three main Parts, which in this case present in turn, more or less, the facts about brain and mind – the evidence, the theory of Phi, and the implications. Galileo has a different guide in each Part: first someone who is more or less Francis Crick, then someone who is more or less Alan Turing, and finally for reasons I couldn’t really fathom, someone who is more or less Charles Darwin (a bit of an English selection, as the notes point out); typically each chapter involves an encounter with some notable personality in the midst of an illuminating experience or experiment; quite often, as Tononi frankly explains, one that probably did not feature in their real lives. Each chapter ends with notes that set out the source of images and quotations and give warnings about any alterations: the notes also criticise the chapter, its presentation, and the attitudes of the personalities involved, often accusing them of arrogance and taking a very negative view of the presumed author’s choices. I presume the note writer is, as it were, a sock puppet, and I suppose this provides an entertaining way for Tononi to voice the reservations he feels about the main text, backing up the dialogues within that text with a kind of one-sided meta-textual critique.

Dialogue is a long-established format for philosophy and has certain definite advantages: in particular it allows an author to set out different cases with full vigour without a lot of circumlocution and potential confusion. I think on the whole it works here, though I must admit some reservation about having Galileo put into the role of the naive explorer. I sort of revere Galileo as a man whose powers of observation and analysis were truly extraordinary, and personally I wouldn’t dare put words into his mouth, let alone thoughts into his head: I’d have preferred someone else: perhaps a fictional Lemuel Gulliver figure. It makes it worse that while other characters have their names lightly disguised (which I take to be in part a graceful acknowledgement that they are really figments of Tononi) Galileo is plain Galileo.

Why has Tononi produced such an unusual book? Isn’t there a danger that this will actually cause his Integrated Information Theory to be taken less seriously in some quarters? I think to Tononi the theory is both exciting and illuminating, with the widest of implications, and that’s what he wants to share with us. At times I’m afraid that enthusiasm and the torrent of one damn thing after another became wearing for me and made the book harder to read: but in general it cannot help but be engaging.

The theory, moreover, has a lot of good qualities. We’ve discussed it before: in essence Tononi suggests that consciousness arises where sufficient information is integrated. Even a small amount may yield a spark of awareness, but the more we integrate, the greater the level of consciousness. Integrated potential is as important as integrated activity: the fact that darkness is not blue and not loud and not sweet-tasting makes it, for us, a far more complex thing than it could ever be to an entity that lacked the capacity for those perceptions. It’s this role of absent or subliminal qualities that makes qualia seem so ineffable.

This makes more sense than some theories I’ve read but for me it’s still somewhat problematic. I’m unclear about the status of the ‘information’ we’re integrating and I don’t really understand what the integration amounts to, either. Tononi starts out with information in the unobjectionable sense defined by Shannon, but he seems to want it to do things that Shannon was clear it couldn’t. He talks about information having meaning when seen from the inside, but what’s this inside and how did it get there? He says that when a lot of information is aggregated it generates new information – hard to resist the idea that in the guise of ‘new information’ he is smuggling in a meaningfulness that Shannonian information simply doesn’t have.  The suggestion that inactive bits of the system may be making important contributions just seems to make it worse. It’s one thing for some neural activity to be subliminal or outside the zone of consciousness: it’s quite different for neurons that don’t fire to be contributing to experience. What’s the functional difference between neurons that don’t fire and those that don’t exist? Is it about the possibility that the existing ones could have fired? I don’t even want to think about the problems that raises.

I don’t like the idea of qualia space, another of Tononi’s concepts, either. As Dennett nearly said, what qualia space? To have an orderly space of this kind you must be able to reduce the phenomenon in question to a set of numerical variables which can be plotted along axes. Nobody can do this with qualia; nobody knows if it is even possible in principle. When Wundt and his successors set out to map the basic units of subjective experience, they failed to reach agreement, as Tononi mentions. As an aspiration qualia space might be reasonable, but you cannot just assume it’s OK, and doing so raises a fear that Tononi has unconsciously slipped from thinking about real qualia to thinking about sense-data or some other tractable proxy. People do that a lot, I’m afraid.

One implication of the theory which I don’t much like is the sliding scale of consciousness it provides. If the level of consciousness relates to the quantity of information integrated, then it is infinitely variable, from the extremely dim awareness of a photodiode up through flatworms to birds, humans and – why not – to hypothetical beings whose consciousness far exceeds our own. Without denying that consciousness can be clear or dim, I prefer to think that in certain important respects there are plateaux: that for moral purposes, in particular, enough is enough. A certain level of consciousness is necessary for the awareness of pain, but being twice as bright doesn’t make my feelings twice as great. I need a certain level of consciousness to be responsible for my own actions, but having a more massive brain doesn’t thereafter make me more responsible. Not, of course, that Tononi is saying that, exactly: but if super-brained aliens land one day and tell us that their massive information capacity means their interests take precedence over ours, I hope Tononi isn’t World President.

All that said, I ought to concede that in broad terms I think it’s quite likely Tononi is right: it probably is the integration of information that gives rise to consciousness. We just need more clarity about how – and about what that actually means.


Picture: Phi. I was wondering recently what we could do with all the new computing power which is becoming available.  One answer might be calculating phi, effectively a measure of consciousness, which was very kindly drawn to my attention by Christof Koch. Phi is actually a time- and state-dependent measure of integrated information developed by Giulio Tononi in support of the Integrated Information Theory (IIT) of consciousness which he and Koch have championed.  Some readable expositions of the theory are here and here with the manifesto here and a formal paper presenting phi here. Koch says the theory is the most exciting conceptual development he’s seen in “the inchoate science of consciousness”, and I can certainly see why.

The basic premise of the theory is simply that consciousness is constituted by integrated information. It stems from the phenomenological observations that there are vast numbers of possible conscious states, and that each of them appears to unify or integrate a very large number of items of information. What really lifts the theory above the level of most others in this area is the detailed mathematical underpinning, which means phi is not a vague concept but a clear and possibly even practically useful indicator.
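To give a rough feel for what a quantitative measure of integration looks like, here is a toy sketch of my own. It is emphatically not Tononi’s phi (which involves cause–effect repertoires and a search over partitions of the system); it computes the much simpler total correlation: the amount by which the marginal entropies of the parts exceed the entropy of the whole, i.e. how much information the whole carries beyond its parts taken separately.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a distribution {outcome: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def total_correlation(joint):
    """Sum of marginal entropies minus joint entropy, for a joint
    distribution over tuples: {(x1, ..., xn): probability}."""
    n = len(next(iter(joint)))
    marginal_entropies = 0.0
    for i in range(n):
        marginal = {}
        for state, p in joint.items():
            marginal[state[i]] = marginal.get(state[i], 0.0) + p
        marginal_entropies += entropy(marginal)
    return marginal_entropies - entropy(joint)

# Two perfectly correlated bits: the whole carries 1 bit more than
# the parts considered separately account for.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
print(total_correlation(correlated))   # → 1.0

# Two independent fair coins: no integration at all.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
print(total_correlation(independent))  # → 0.0
```

The point of the toy is only this: a system of parts can carry strictly more information jointly than severally, and that surplus can be given a number. Phi refines the idea considerably, but the intuition is the same.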

One implication of the theory is that consciousness lies on a continuum: rather than being an on-or-off matter, it comes in degrees. The idea that lower levels of consciousness may occur when we are half-awake, or in dogs or other animals, is plausible and appealing. Perhaps a little less intuitive is the implication that there must in theory be higher states of consciousness than any existing human being could ever have attained. I don’t think this means states of greater intelligence or enlightenment, necessarily; it’s more a matter of being more awake than awake, an idea which (naturally enough, I suppose) is difficult to get one’s head around, but has a tantalising appeal.

Equally, the theory implies that some minimal level of consciousness goes a long way down to systems with only a small quantity of integrated information. As Koch points out, this looks like a variety of panpsychism or panexperientialism, though I think the most natural interpretation is that real consciousness probably does not extend all that far beyond observably animate entities.

One congenial aspect of the theory for me is that it puts causal relations at the centre of things: while a system with complex causal interactions may generate a high value of phi, a ‘replay’ of its surface dynamics would not. This seems to capture in a clearer form the hand-waving intuitive point I was making recently in discussion of Mark Muhlestein’s ideas.  Unfortunately calculation of Phi for the human brain remains beyond reach at the moment due to the unmanageable levels of complexity involved;  this is disappointing, but in a way it’s only what you would expect. Nevertheless, there is, unusually in this field, some hope of empirical corroboration.
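The intractability is easy to illustrate on the back of an envelope. Exact phi requires finding the partition of the system across which the least information is integrated, and even if we restrict attention to two-way splits, the number of candidate cuts grows exponentially with the number of elements (this sketch is my own illustration, not a claim about how phi is actually estimated in practice):

```python
def bipartitions(n):
    """Ways to split n elements into two non-empty groups: each element
    goes to one side or the other (2**n assignments), halved because
    swapping the sides gives the same cut, minus the one trivial
    everything-on-one-side split."""
    return 2 ** (n - 1) - 1

# 3 elements: the familiar three splits {1|23, 2|13, 3|12}.
print(bipartitions(3))    # → 3
print(bipartitions(10))   # → 511
# C. elegans has ~302 neurons; already the count is astronomical,
# and the human brain has tens of billions.
print(bipartitions(302))
```

Brute-force search over partitions is therefore hopeless for anything brain-sized, which is why phi calculations so far have been confined to very small model systems.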

I think I’m convinced that phi measures something interesting and highly relevant to consciousness; perhaps it remains to be finally established that what it measures is consciousness itself, rather than some closely associated phenomenon, some necessary but not sufficient condition. Your view about this, pending further evidence, may be determined by how far you think phenomenal experience can be identified with information. Is consciousness in the end what information – integrated information – just feels like from the inside? Could this be the final answer to the insoluble question of qualia? The idea doesn’t strike me with the ‘aha!’ feeling of the blinding insight, but (and this is pretty good going in this field) it doesn’t seem obviously wrong either.  It seems the right kind of answer, the kind that could be correct.

Could it?