Information and Experience

You can’t build experience out of mere information. Not, at any rate, the way the Integrated Information Theory (IIT) seeks to do it. So says Garrett Mindt in a paper forthcoming in the Journal of Consciousness Studies (JCS).

‘Information’ is notoriously a slippery term, and much depends on how you’re using it. Commonly people distinguish between the everyday meaning, which makes information a matter of meaning or semantics, and the sense defined by Shannon, which is statistical and excludes meaning but is rigorous and tractable.
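To see just how completely the Shannon sense ignores meaning, here is a tiny Python illustration of my own (nothing to do with Mindt’s paper): the measure depends only on the statistics of the symbols, so a sentence and a meaningless rearrangement of the same characters carry exactly the same quantity of information.

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Average information per symbol, in bits, from symbol frequencies alone."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

meaningful = "the cat sat on the mat"
scrambled = "".join(sorted(meaningful))   # same characters, meaning destroyed

print(shannon_entropy(meaningful))                                # roughly 3 bits per symbol
print(shannon_entropy(meaningful) == shannon_entropy(scrambled))  # True
```

Whatever the sentence means contributes nothing to the number; only the letter frequencies matter.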

It is a fairly common sceptical claim that you cannot get consciousness, or anything like intentionality or meaning, out of Shannon-style information. Mindt describes in his paper a couple of views that attack IIT on similar grounds. One is by Cerullo, who says:

‘Only by including syntactic, and most importantly semantic, concepts can a theory of information hope to model the causal properties of the brain…’

The other is by Searle, who argues that information, correctly understood, is observer dependent. The fact that this post, for example, contains information depends on conscious entities interpreting it as such, or it would be mere digital noise. Since information, defined this way, requires consciousness, any attempt to derive consciousness from it must be circular.

Although Mindt is ultimately rather sympathetic to both these cases, he says they fail because they assume that IIT is working with a Shannonian conception of information: but that’s not right. In fact IIT invokes a distinct causal conception of information as being ‘a difference that makes a difference’. A system conveys information, in this sense, if it can induce a change in the state of another system. Mindt likens this to the concept of information introduced by Bateson.
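The contrast with Shannon’s receiver-free statistics can be put concretely: in the causal sense, roughly, A carries information for B just in case a change in A’s state can change B’s subsequent state. A toy sketch of my own (not Mindt’s, and not IIT’s formal machinery):

```python
def step(a: int) -> dict:
    """One update of a toy system with two 'receivers'."""
    b = a        # B tracks A: a difference in A makes a difference to B
    c = 1        # C ignores everything: no difference in A registers here
    return {"B": b, "C": c}

print(step(0))   # {'B': 0, 'C': 1}
print(step(1))   # {'B': 1, 'C': 1}  <- A is informative for B, but not for C
```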

Mindt makes the interesting point that Searle and others tend to carve the problem up by separating syntax from semantics; but it’s not clear that semantics is required for Hard Problem-style conscious experience (in fact I think the question of what, if any, connection there is between the two is puzzling and potentially quite interesting). Better to use the distinction favoured by Tononi in the context of IIT, between extrinsic information – which covers both syntax and semantics – and intrinsic information, which covers structure, dynamics, and phenomenal aspects.

Still, Mindt finds IIT vulnerable to a slightly different attack. Even with the clarifications he has made, the theory remains one of structure and dynamics, and physicalist structure and dynamics just don’t look like the sort of thing that could ever account for the phenomenal qualities of experience. There is no theoretical bridge arising from IIT that could take us across the explanatory gap.

I think the case is well made, although unfortunately it may be a case for despair. If this objection stands for IIT then it most likely stands for all physicalist theories. This is a little depressing because from one point of view, non-physicalist theories look unattractive. From that perspective, coming up with a physical explanation of phenomenal experience is exactly the point of the whole enquiry; if no such explanation is possible, no decent answer can ever be given.

It might still be the case that IIT is the best theory of its kind, and that it is capable of explaining many aspects of consciousness. We might even hope to squeeze the essential Hard Problem to one side. What if IIT could never explain why the integration of information gives rise to experience, but could explain everything, or most things, about the character of experience? Might we not then come to regard the Hard Problem as one of those knotty tangles that philosophers can mull over indefinitely, while the rest of us put together a perfectly good practical understanding of how mind and brain work?

I don’t know what Mindt would think about that, but he rounds out his case by addressing one claimed prediction of IIT; namely that if a large information complex is split, the attendant consciousness will also divide. This looks like what we might see in split-brain cases, although so far as I can see, nobody knows whether split-brain patients have two separate sets of phenomenal experiences, and I’m not sure there’s any way of testing the matter. Mindt points out that the prediction is really a matter of ‘Easy Problem’ issues and doesn’t help otherwise: it’s also not an especially impressive prediction, as many other possible theories would predict the same thing.

Mindt’s prescription is that we should go back and have another try at that definition of information; without attempting that himself, he smiles on dual-aspect theories. I’m afraid I am left scowling at all of them; as always in this field, the arguments against any idea seem so much better than the arguments for it.


Doorknobs and Intuition

‘…stupid as a doorknob…’ Just part of Luboš Motl’s vigorous attack on Scott Aaronson’s critique of IIT, the Integrated Information Theory of Giulio Tononi.

To begin at the beginning. IIT says that consciousness arises from integrated information, and proposes a mathematical approach to quantifying the level of integrated information in a system, a quantity it names Phi (actually there are several variant ways to define Phi that differ in various details, which is perhaps unfortunate). Aaronson and Motl both describe the idea as a worthy effort, but both have reservations: Aaronson thinks the problems are fatal, while Motl thinks IIT offers a promising direction for further work.
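For anyone who wants a feel for what ‘quantifying integration’ could mean, here is a deliberately crude Python sketch of my own. It is not Tononi’s actual Phi (the official versions perturb the system and involve a good deal more machinery, and as noted they come in several variants); it just captures the flavour: consider every way of cutting a small network in two, measure how much each side’s current state tells you about the other side’s next state, and score the system by its weakest cut. A network whose parts can be separated without losing any predictive information scores zero.

```python
from itertools import product, combinations
from collections import Counter
from math import log2

def entropy(counts: Counter) -> float:
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

def mutual_information(pairs) -> float:
    """I(X;Y) from a list of equally likely (x, y) outcomes."""
    return (entropy(Counter(x for x, _ in pairs))
            + entropy(Counter(y for _, y in pairs))
            - entropy(Counter(pairs)))

def toy_phi(update, n: int) -> float:
    """Minimum, over all bipartitions, of the cross-cut predictive information,
    assuming every current state of the n binary nodes is equally likely."""
    states = list(product((0, 1), repeat=n))
    nodes = range(n)
    best = None
    for size in range(1, n // 2 + 1):
        for part_a in combinations(nodes, size):
            part_b = tuple(i for i in nodes if i not in part_a)
            info = 0.0
            for now, nxt in ((part_a, part_b), (part_b, part_a)):
                pairs = []
                for s in states:
                    ns = update(s)
                    pairs.append((tuple(s[i] for i in now), tuple(ns[i] for i in nxt)))
                info += mutual_information(pairs)
            best = info if best is None else min(best, info)
    return best

def xor_net(s):   # each node becomes the XOR of the other two: tightly integrated
    return (s[1] ^ s[2], s[0] ^ s[2], s[0] ^ s[1])

def copy_net(s):  # each node just copies itself: no integration at all
    return s

print(toy_phi(xor_net, 3))   # 1.0 -- no cut severs it without losing information
print(toy_phi(copy_net, 3))  # 0.0 -- the parts behave as if the cut weren't there
```

On this toy measure the three-node XOR network beats three nodes that simply mind their own business, which is the qualitative contrast IIT is after; the real Phi, and its several variants, are computed quite differently in detail.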

Both pieces contain a lot of interesting side discussion, including Aaronson’s speculation that approximating phi for a real brain might be an NP-hard problem. This is the digression that prompted the doorknob comment: so what if it were NP-hard, demands Motl; you think nature is barred from containing NP-hard problems?

The real crux as I understand it is Aaronson’s argument that we can give examples of systems with high scores for Phi that we know intuitively could not be conscious. Eric Schwitzgebel has given a somewhat similar argument but cast in more approachable terms; Aaronson uses a Vandermonde matrix for his example of a high-phi but intuitively non-conscious entity, whereas Schwitzgebel uses the United States.

Motl takes exception to Aaronson’s use of intuition here. How does he know that his matrix lacks consciousness? If Aaronson’s intuition is the test, what’s the point of having a theory? The whole point of a theory is to improve on and correct our intuitive judgements, isn’t it? If we’re going to fall back on our intuitions, argument is pointless.

I think appeals to intuition are rare in physics, where it is probably natural to regard them as illegitimate, but they’re not that unusual in philosophy, especially in ethics. You could argue that G.E. Moore’s approach was essentially to give up on ethical theory and rely on intuition instead. Often intuition limits what we regard as acceptable theorising, but our theories can also ‘tutor’ and change our intuitions. My impression is that real-world beliefs about death, for example, have changed substantially in recent decades under the influence of utilitarian reasoning; we’re now much less likely to think that ending a life is simply forbidden and more likely to accept calculations about the value of lives. We still, however, rule out as unintuitive (‘just obviously wrong’) such utilitarian conclusions as the propriety of sometimes punishing the innocent.

There’s an interesting question as to whether there actually is, in itself, such a thing as intuition. Myself I’d suggest the word covers any appealing pre-rational thought; we use it in several ways. One is indeed to test our conclusions where no other means is available; it’s interesting that Motl actually remarks that the absence of a reliable objective test of consciousness is one of IIT’s problems; he obviously does not accept that intuition could be a fall-back, so he is presumably left with the gap (which must surely affect all theories of consciousness). Philosophers also use an appeal to intuition to help cut to the chase, by implicitly invoking shared axioms and assumptions; and often enough ‘thought experiments’ which are not really experiments at all but in the Dennettian phrase ‘intuition pumps’ are used for persuasive effect; they’re not proofs but they may help to get people to agree.

Now as a matter of fact I think in Aaronson’s case we can actually supply a partial argument to replace pure intuition. In this discussion we are mainly talking about subjective consciousness, the ‘something it is like’ to experience things. But I think many people would argue that Hard Problem consciousness requires the Easy Problem kind to be in place first as a basis. Subjective experience, we might argue, requires the less mysterious apparatus of normal sensory or cognitive experience; and Aaronson (or Schwitzgebel) could argue that their example structures definitely don’t have the sort of structure needed for that, a conclusion we can reach through functional argument without the need for intuition.

Not everybody would agree, though; some, especially those who lean towards panpsychism and related theories of ‘consciousness everywhere’, might see nothing wrong with the idea of subjective consciousness without the ‘mechanical’ kind. The standard philosophical zombie has Easy Problem consciousness without qualia; these people would accept an inverted zombie who has qualia with no brain function. It seems a bit odd to me to pair such a view with IIT (if you don’t think functional properties are required, I’d have thought you would think that integrating information was also dispensable) but there’s nothing strictly illogical about it. Perhaps the dispute over intuition really masks a different disagreement, over the plausibility of such inverted zombies, obviously impossible in Aaronson’s eyes, but potentially viable in Motl’s?

Motl goes on to offer what I think is a rather good objection to IIT as it stands: namely, that it seems to award consciousness to ‘frozen’ or static structures if they have a high enough Phi score. He thinks it’s necessary to reformulate the idea to capture the point that consciousness is a process. I agree – but how does Motl know consciousness requires a process? Could it be that it’s just… intuitively obvious?

Phlegm theory

Worse than wrong? A trenchant piece from Michael Graziano likens many theories of consciousness to the medieval theory of humours; in particular the view that laziness is due to a build-up of phlegm. It’s not that the theory is wrong, he says – though it is – it’s that it doesn’t even explain anything.

To be fair I think the theory of the humours was a little more complex than that, and there is at least some kind of hand-waving explanatory connection between the heaviness of phlegm and slowness of response. According to Graziano such theories flatter our intuitions; they offer a vague analogy which feels metaphorically sort of right – but, on examination, no real mechanism. His general point is surely very sound; there are indeed too many theories about conscious experience that describe a reasonably plausible process without ever quite explaining how the process magically gives rise to actual feeling, to the ineffable phenomenology.

As an example, Graziano mentions a theory that neural oscillations are responsible for consciousness; I think he has in mind the view espoused by Francis Crick and others that oscillations at 40 hertz give rise to awareness. This idea was immensely popular at one time and people did talk about “40 hertz” as though it was a magic key. Of course it would have been legitimate to present this as an enigmatic empirical finding, but the claim seemed to be that it was an answer rather than an additional question. So far as I know Graziano is right to say that no-one ever offered a clear view as to why 40 hertz had this exceptional property, rather than 30 or 50, or for that matter why co-ordinated oscillation at any frequency should generate consciousness. It is sort of plausible that harmonising on a given frequency might make parts of the brain work together in some ways, and people sometimes took the view that synchronised firing might, for example, help explain the binding problem – the question of how inputs from different senses arriving at different times give rise to a smooth and flawlessly co-ordinated experience. Still, at best working in harmony might explain some features of experience: it’s hard to see how in itself it could provide any explanation of the origin or essential nature of consciousness. It just isn’t the right kind of thing.

As a second example Graziano boldly denounces theories based on integrated information. Yes, consciousness is certainly going to require the integration of a lot of information, but that seems to be a necessary, not a sufficient condition. Intuitively we sort of imagine a computer getting larger and more complex until, somehow, it wakes up. But why would integrating any amount of information suddenly change its inward nature? Graziano notes that some would say dim sparks of awareness are everywhere, so that linking them gives us progressively brighter arrays. That, however, is no explanation, just an even worse example of phlegm.

So how does Graziano explain consciousness? He concedes that he too has no brilliant resolution of the central mystery. He proposes instead a project which asks, not why we have subjective experience, but why we think we do: why we say we do with such conviction. The answer, he suggests, is in metacognition. (This idea will not be new to readers who are acquainted with Scott Bakker’s Blind Brain Theory.) The mind makes models of the world and models of itself, and it is these inaccurate models and the information we generate from them that make us see something magic about experience. In the brief account here I’m not really sure Graziano succeeds in making this seem more clear-cut than the theories he denounces. I suppose the parallel existence of reality and a mental model of reality might plausibly give rise to an impression that there is something in our experience over and above simple knowledge of the world; but I’m left a little nervous about whether that isn’t another example of the kind of intuition-flattering the other theories provide.

This kind of metacognitive theory tends naturally to be a sceptical theory; our conviction that we have subjective experience proceeds from an error or a defective model, so the natural conclusion, on grounds of parsimony if no others, is that we are mistaken and there is really nothing special about our brain’s data processing after all.

That may be the natural conclusion, but in other respects it’s hard to accept. It’s easy to believe that we might be mistaken about what we’re experiencing, but can we doubt that we’re having an experience of some kind? We seem to run into quasi-Cartesian difficulties.

Be that as it may Graziano deserves a round of applause for his bold (but not bilious) denunciation of the phlegm.

Not a panpsychist but an emergentist?

Christof Koch declares himself a panpsychist in this interesting piece, but I don’t think he really is one. He subscribes to the Integrated Information Theory (IIT) of Giulio Tononi, which holds that consciousness is created by the appropriate integration of sufficient quantities of information. The level of integrated information can be mathematically expressed in a value called Phi: we have discussed this before a couple of times. I think this makes Koch an emergentist, but curiously enough he vigorously denies that.

Koch starts with a quotation about every outside having an inside which aptly brings out the importance of the first-person perspective in all these issues. It’s an implicit theme of what Koch says (in my reading at least) that consciousness is something extra. If we look at the issue from a purely third-person point of view, there doesn’t seem to be much to get excited about. Organisms exhibit different levels of complexity in their behaviour and it turns out that this complexity of behaviour arises from a greater complexity in the brain. You don’t say! The astonishment meter is still indicating zero. It’s only when we add in the belief that at some stage the inward light of consciousness, actual phenomenal experience, has come on that it gets interesting. It may be that Koch wants to incorporate panpsychism into his outlook to help provide that ineffable light, but attempting to make two theories work together is a risky path to take. I don’t want to accuse anyone of leaning towards dualism (which is the worst kind of philosophical bitchiness) but… well, enough said. I think Koch would do better to stick with the austere simplicity of IIT and say: that magic light you think you see is just integrated information. It may look a bit funny but that’s all it is, get used to it.

He starts off by arguing persuasively that consciousness is not the unique prerogative of human beings. Some, he says, have suggested that language is the dividing line, but surely some animals, preverbal infants and so on should not be denied consciousness? Well, no, but language might be interesting, not for itself but because it is an auxiliary effect of a fundamental change in brain organisation, one that facilitates the handling of abstract concepts, say (or one that allows the integration of much larger quantities of information, why not?). It might almost be a side benefit, but also a handy sign that this underlying reorganisation is in place, which would not be to say that you couldn’t have the reorganisation without having actual language. We would then have something, human-style thought, which was significantly different from the feelings of dogs, although the impoverishment of our vocabulary makes us call them both consciousness.

Still, in general the view that we’re dealing with a spectrum of experience, one which may well extend down to the presumably dim adumbrations of worms and insects, seems only sensible.

One appealing way of staying monist but allowing for the light of phenomenal experience is through emergence: at a certain level we find that the whole becomes more than the sum of its parts: we do sort of get something extra, but in an unobjectionable way. Strangely, Koch will have no truck with this kind of thinking. He says:

‘the mental is too radically different for it to arise gradually from the physical’.

At first sight this seemed to me almost a direct contradiction of what he had just finished saying. The spectrum of consciousness suggests that we start with the blazing 3D cinema projector of the human mind, work our way down to the magic lanterns of dogs, the candles of newts, and the faint tiny glows of worms – and then the complete darkness of rocks and air. That suggests that consciousness does indeed build up gradually out of nothing, doesn’t it? An actual panpsychist, moreover, pushes the whole thing further, so that trees have faint twinkles and even tiny pieces of clay have a detectable scintilla.

Koch’s view is not, in fact, contradictory: what he seems to want is something like one of those dimmer switches that has a definite on and off, but gradations of brightness when on. He’s entitled to take that view, but I don’t think I agree that gradual emergence of consciousness is unimaginable. Take the analogy of a novel. We can start with Pride and Prejudice, work our way down through short stories or incoherent first drafts, to recipe books or collections of limericks, books with scribble and broken sentences, down to books filled with meaningless lines, and the chance pattern of cracks on a wall. All the way along there will be debatable cases, and contrarians who disbelieve in the real existence of literature can argue against the whole thing (‘You need to exercise your imagination to make Pride and Prejudice a novel; but if you are willing to use your imagination I can tell you there are finer novels in the cracks on my wall than anything Jane bloody Austen ever wrote…’) : but it seems clear enough to me that we can have a spectrum all the way down to nothing. That doesn’t prove that consciousness is like that, but makes it hard to assert that it couldn’t be.

The other reason it seems odd to hear such an argument from Koch is that he espouses IIT, which seems to require a spectrum that sits well with emergentism. Presumably on Koch’s view a small amount of integrated information does nothing, but at some point, when there’s enough being integrated, we start to get consciousness? Yet he says:

“if there is nothing there in the first place, adding a little bit more won’t make something. If a small brain won’t be able to feel pain, why should a large brain be able to feel the god-awfulness of a throbbing toothache? Why should adding some neurons give rise to this ineffable feeling?”

Well, because a small brain only integrates a small amount of information, whereas a large one integrates enough for full consciousness? I think I must be missing something here, but look at this:

“ [Consciousness] is a property of complex entities and cannot be further reduced to the action of more elementary properties. We have reached the ground floor of reductionism.”

Isn’t that emergence? Koch must see something else he thinks essential to emergentism, something he doesn’t like, but I’m not seeing it.

The problem with Koch being a panpsychist is that for panpsychists souls (or in this case consciousness) have to be everywhere. Even a particle of stone or a screwed-up sheet of wrapping paper must have just the basic spark; the lights must be at least slightly on. Koch doesn’t want to go quite that far – and I have every sympathy with that, but it means taking the pan out of the panpsychist. Koch fully recognises that he isn’t espousing traditional full-blooded panpsychism, but in my opinion he deviates too far to be entitled to the badge. What Koch believes is that everything has the potential to instantiate consciousness when correctly organised and integrated. That amounts to no more than believing in the neutrality of the substrate: that neurons are not essential and that consciousness can be built with anything so long as its functional properties are right. All functionalists and a lot of other people (not everyone, of course) believe that without being panpsychists.

Perhaps functionalism is really the direction Koch’s theories lean towards. After all, it’s not enough to integrate information in any superficial way. A big database which exhaustively cross-referenced the Library of Congress would not seem much of a candidate for consciousness. Koch realises that there have to be some rules about what kinds of integration matter, but I think that if the theory develops far enough these other constraints will play an increasingly large role, until eventually we find that they have taken over the theory and the quantity of integrated information has receded to the status of a necessary but not sufficient condition.

I suppose that that might still leave room for Tononi’s Phi meter, now apparently built, to work satisfactorily. I hope it does, because it would be pretty useful.

Oh, Phi!

Giulio Tononi’s Phi is an extraordinary book.  It’s heavy, and I mean that literally: presumably because of the high quality glossy paper, it is noticeably weighty in the hand; not one I’d want to hold up before my eyes for long without support, though perhaps my wrists have been weakened by habitual Kindle use.

That glossy paper is there for the vast number of sumptuous pictures with which the book is crammed; mainly great works of art, but also scientific scans and diagrams (towards the end a Pollock-style painting and a Golgi-Cox image of real neurons are amusingly juxtaposed: you really can barely tell which is which). What is going on with all this stuff?

My theory is that the book reflects a taste conditioned by internet use. The culture of the World Wide Web is quotational and referential: it favours links to good stuff and instant impact. In putting together a blog authors tend to gather striking bits and pieces rather the way a bower bird picks up brightly coloured objects to enhance its display ground, without worrying too much about coherence or context. (If we were pessimistic we might take this as a sign that our culture, like classical Greek culture before it, is moving away from the era of original thought into an age of encyclopedists; myself I’m not that gloomy –  I think that however frothy the Internet may get in places it’s all extra and mostly beneficial.) Anyway, that’s a bit what this book is like; a site decorated with tons of ‘good stuff’ trawled up from all over, and in that slightly uncomplimentary sense it’s very modern.

You may have guessed that I’m not sure I like this magpie approach.  The pictures are forced into a new context unrelated to what the original artist had in mind, one in which they jostle for space, and many are edited or changed, sometimes rather crudely (I know: I should talk, when it comes to crude editing of borrowed images – but there we are). The choice of image skews towards serious art (no cartoons here) and towards the erotic, scary, or grotesque. Poor Maddalena Svenuta gets tipped on her back, perhaps to emphasise the sexual overtones of the painting – although they are unignorable enough in the original orientation. This may seem to suggest a certain lack of respect for sources and certainly produces a rather indigestible result; but perhaps we ought to cut Tononi a bit of slack. The overflowing cornucopia of images seems to reflect his honest excitement and enthusiasm: he may, who knows, be pioneering a new form which we need time to get used to; and like an over-stuffed blog, the overwhelming gallimaufry is likely here and there to introduce any reader to new things worth knowing about. Besides the images the text itself is crammed with disguised quotes and allusions.  Prepare to be shocked: there is no index.

I’m late to the party here. Gary Williams has made some sharp-eyed observations on the implicit panpsychism of Tononi’s views; Scott Bakker rather liked the book and the way some parts of Tononi’s theory chimed with his own Blind Brain Theory (more on that another time, btw). Scott, however, raised a ‘quibble’ about sexism: I think he must have in mind this hair-raising sentence in the notes to Chapter 29:

At the end, Emily Dickinson saves the day with one of those pronouncements that show how poets (or women) have deeper intuition of what is real than scientists (or men) ever might: internal difference, where all the meanings are.

Ouch, indeed: but I don’t think this is meant to be Tononi speaking.

The book is arranged to resemble Dante’s Divine Comedy in a loose way: Galileo is presented as the main character, being led through dreamlike but enlightening encounters in three main Parts, which in this case present in turn, more or less, the facts about brain and mind – the evidence, the theory of Phi, and the implications. Galileo has a different guide in each Part: first someone who is more or less Francis Crick, then someone who is more or less Alan Turing, and finally for reasons I couldn’t really fathom, someone who is more or less Charles Darwin (a bit of an English selection, as the notes point out); typically each chapter involves an encounter with some notable personality in the midst of an illuminating experience or experiment; quite often, as Tononi frankly explains, one that probably did not feature in their real lives. Each chapter ends with notes that set out the source of images and quotations and give warnings about any alterations: the notes also criticise the chapter, its presentation, and the attitudes of the personalities involved, often accusing them of arrogance and taking a very negative view of the presumed author’s choices. I presume the note writer is, as it were, a sock puppet, and I suppose this provides an entertaining way for Tononi to voice the reservations he feels about the main text, backing up the dialogues within that text with a kind of one-sided meta-textual critique.

Dialogue is a long-established format for philosophy and has certain definite advantages: in particular it allows an author to set out different cases with full vigour without a lot of circumlocution and potential confusion. I think on the whole it works here, though I must admit some reservation about having Galileo put into the role of the naive explorer. I sort of revere Galileo as a man whose powers of observation and analysis were truly extraordinary, and personally I wouldn’t dare put words into his mouth, let alone thoughts into his head: I’d have preferred someone else: perhaps a fictional Lemuel Gulliver figure. It makes it worse that while other characters have their names lightly disguised (which I take to be in part a graceful acknowledgement that they are really figments of Tononi) Galileo is plain Galileo.

Why has Tononi produced such an unusual book? Isn’t there a danger that this will actually cause his Integrated Information Theory to be taken less seriously in some quarters? I think to Tononi the theory is both exciting and illuminating, with the widest of implications, and that’s what he wants to share with us. At times I’m afraid that enthusiasm and the torrent of one damn thing after another became wearing for me and made the book harder to read: but in general it cannot help but be engaging.

The theory, moreover, has a lot of good qualities. We’ve discussed it before: in essence Tononi suggests that consciousness arises where sufficient information is integrated. Even a small amount may yield a spark of awareness, but the more we integrate, the greater the level of consciousness. Integrated potential is as important as integrated activity: the fact that darkness is not blue and not loud and not sweet-tasting makes it, for us, a far more complex thing than it could ever be to an entity that lacked the capacity for those perceptions. It’s this role of absent or subliminal qualities that makes qualia seem so ineffable.
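Tononi likes to illustrate this with a photodiode (which reappears below in the discussion of the sliding scale): for a device that can only register light or dark, ‘dark’ excludes a single alternative, whereas for us the same darkness is specified against an enormous repertoire of things it is not. A back-of-the-envelope version, with numbers for the human repertoire invented purely for illustration (and hugely understated):

```python
from math import log2

# Crude Shannon-style count: a state generates log2(N) bits if it singles out
# one possibility from a repertoire of N equally likely alternatives.

photodiode_repertoire = 2            # light or dark, nothing else
human_repertoire = 2 * 16 * 8 * 8    # brightness x colour x sound x taste (made-up figures)

print(log2(photodiode_repertoire))   # 1.0 bit: 'dark' excludes just one alternative
print(log2(human_repertoire))        # 11.0 bits: the same darkness excludes blue,
                                     # loud, sweet and much else besides
```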

This makes more sense than some theories I’ve read but for me it’s still somewhat problematic. I’m unclear about the status of the ‘information’ we’re integrating and I don’t really understand what the integration amounts to, either. Tononi starts out with information in the unobjectionable sense defined by Shannon, but he seems to want it to do things that Shannon was clear it couldn’t. He talks about information having meaning when seen from the inside, but what’s this inside and how did it get there? He says that when a lot of information is aggregated it generates new information – hard to resist the idea that in the guise of ‘new information’ he is smuggling in a meaningfulness that Shannonian information simply doesn’t have.  The suggestion that inactive bits of the system may be making important contributions just seems to make it worse. It’s one thing for some neural activity to be subliminal or outside the zone of consciousness: it’s quite different for neurons that don’t fire to be contributing to experience. What’s the functional difference between neurons that don’t fire and those that don’t exist? Is it about the possibility that the existing ones could have fired? I don’t even want to think about the problems that raises.

I don’t like the idea of qualia space, another of Tononi’s concepts, either. As Dennett nearly said, what qualia space? To have an orderly space of this kind you must be able to reduce the phenomenon in question to a set of numerical variables which can be plotted along axes. Nobody can do this with qualia; nobody knows if it is even possible in principle. When Wundt and his successors set out to map the basic units of subjective experience, they failed to reach agreement, as Tononi mentions. As an aspiration qualia space might be reasonable, but you cannot just assume it’s OK, and doing so raises a fear that Tononi has unconsciously slipped from thinking about real qualia to thinking about sense-data or some other tractable proxy. People do that a lot, I’m afraid.

One implication of the theory which I don’t much like is the sliding scale of consciousness it provides. If the level of consciousness relates to the quantity of information integrated, then it is infinitely variable, from the extremely dim awareness of a photodiode up through flatworms to birds, humans and – why not – to hypothetical beings whose consciousness far exceeds our own. Without denying that consciousness can be clear or dim, I prefer to think that in certain important respects there are plateaux: that for moral purposes, in particular, enough is enough. A certain level of consciousness is necessary for the awareness of pain, but being twice as bright doesn’t make my feelings twice as great. I need a certain level of consciousness to be responsible for my own actions, but having a more massive brain doesn’t thereafter make me more responsible. Not, of course, that Tononi is saying that, exactly: but if super-brained aliens land one day and tell us that their massive information capacity means their interests take precedence over ours, I hope Tononi isn’t World President.

All that said, I ought to concede that in broad terms I think it’s quite likely Tononi is right: it probably is the integration of information that gives rise to consciousness. We just need more clarity about how – and about what that actually means.