Picture: Christof Koch. Consciousness, however, does not require language. Nor does it require memory. Or perception of the world, or any action in the world, or emotions, or even attention. So say Christof Koch and Giulio Tononi.

(Koch and Tononi? That sounds like an interesting collaboration – besides their own achievements, both have in the past co-authored with grand panjandrums of the field: Koch worked with Francis Crick and Tononi with Gerald Edelman. They have some new ideas here, however.)

I don’t think everyone would agree that consciousness can be stripped down quite as radically as Koch and Tononi suggest. I actually find it rather tricky to get a good intuitive grasp of exactly what it is that’s left when they’ve finished: it seems to be a sort of contentless core of subjectivity. Particular items on the list of exclusions also seem debatable. We know that some unfortunate people are unable to create new memories, for example, but even in their case, they retain knowledge of what’s going on long enough to have reasonable short-term conversations. Could consciousness exist at all if the contents of the mind slid away instantly? Or take action and perception; some would argue that consciousness requires embodiment in a dynamic world; some indeed would argue that consciousness is irreducibly social in character. Again we know that some unlucky people have been isolated from the world by the loss of their senses and their ability to move, without thereby losing consciousness: but these are all people who had perception and action to begin with. Actually Koch and Tononi allow for that point, saying that whether consciousness requires us to have had certain faculties at some stage is another question; they merely assert that ongoing present consciousness doesn’t require them.

Picture: Giulio Tononi. The most surprising of their denials is the denial of attention – some people would come close to saying that consciousness simply is attention. If there were no perception, no memory, and no attention, one begins to wonder how consciousness could have any contents: they couldn’t arrive via the senses; they couldn’t be called up from past experience; and they couldn’t even be generated by concentrating on oneself or the surrounding nullity.

However, let’s not quibble. Consciousness has many meanings and covers a number of related but separable processes. The philosophical convention is that he who propounds the thesis gets to dictate the definitions, so if Koch and Tononi want to discuss a particularly minimal form, they’re fully entitled to do so.

What, then, do they offer as an alternative essence of consciousness? In two words, integrated information. A conscious mind will require many – hugely many – possible states; but, they argue, this is quite different from the case of a digital camera. A one-megapixel camera has lots of possible states (one for every frame of every possible movie, they suggest – not strictly true, since some frames will show identical pictures, though that makes no real difference); but these states are made up of lots of photodiodes whose states are all fully independent. By contrast, states of the mind are indivisible; you can’t choose to see colours without shapes, or experience only the left side of your visual field.
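
The independence point can be made concrete with a toy information-theoretic sketch. To be clear, this is only an illustration, not Koch and Tononi’s actual measure: for two independent binary photodiodes the entropy of the whole is exactly the sum of the parts, while for a pair of coupled elements the parts constrain one another, leaving a surplus (“total correlation”) that crudely stands in for integration.

```python
import math
from itertools import product

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution
    given as {state: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """Sum of marginal entropies minus joint entropy. A crude
    'integration' surplus -- NOT Tononi's phi, just the idea that
    an integrated whole is more than its independent parts."""
    n = len(next(iter(joint)))          # number of elements per state
    marginal_sum = 0.0
    for i in range(n):
        marginal = {}
        for state, p in joint.items():
            marginal[state[i]] = marginal.get(state[i], 0.0) + p
        marginal_sum += entropy(marginal)
    return marginal_sum - entropy(joint)

# "Camera": two binary photodiodes, each 50/50 and fully independent.
independent = {s: 0.25 for s in product([0, 1], repeat=2)}

# "Integrated" system: the two elements are perfectly correlated
# (both on or both off), so each constrains the other.
integrated = {(0, 0): 0.5, (1, 1): 0.5}

print(total_correlation(independent))  # 0.0 bits: whole = sum of parts
print(total_correlation(integrated))   # 1.0 bit: the parts hang together
```

On this toy measure a camera, however many states it has, scores zero, because knowing one photodiode tells you nothing about any other; that is the sense in which its information is not integrated.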

This sounds like an argument that consciousness is not digital but irreducibly analogue, though Koch and Tononi don’t draw that conclusion explicitly, and I don’t think either of them plans to put away his computer simulations. We can, of course, draw a distinction between consciousness itself and the neural mechanisms that support it, so it could be that integrated, analogue conscious experience is generated by processes which are themselves digital, or at least digitizable.

At any rate, they call their theory the Integrated Information Theory (IIT); it suggests, they say, a way of assessing consciousness in a machine – a superior Turing Test (they seem to think the original has been overtaken by the performance of the chatterbot Alice; with due respect to Alice, that seems premature, though I have not personally tried out Alice’s Silver Edition). Their idea is that the candidate should be shown a picture and asked to provide a concise description; human beings will easily recognise, for example, that the picture shows an attempted robbery in a shop, along with many other details, whereas machines will struggle to provide even the most naive and physical descriptions of what is depicted. This would surely be an effective test, but why should integrated information be the source of the human facility at work here? It’s a key point of the theory that, as well as the elements of current experience being integrated into an indivisible whole, current experience is integrated with a whole range of background information: so the human mind can instantly bring into play a lot of knowledge the machine can’t access.

That’s all very well, but this concept of integration is beginning to look less like a simple mechanism and more like a magic trick. Koch and Tononi offer a measure of integrated information, which they call phi: it represents the reduction in uncertainty, expressed in bits, when a system enters a particular state. Interestingly, the idea is that higher values of phi correspond to higher levels of consciousness on a continuous scale; so animals are somewhat less conscious than us on average; but it must also be possible to be more conscious than any human has ever been: in fact, there is in theory no upper limit to how conscious an entity might be. This is heady stuff which could easily lead on to theology if we’re not careful.

To illustrate this idea of the reduction of uncertainty, the authors compare our old friend the photodiode (suppose it to be one with only two states) with a human being. When the lights go out, the photodiode eliminates the possibility of White, and achieves the certainty of Black. But for the human being, darkness eliminates a huge range of possibilities that you might have been seeing: a red screen, a blue one, a picture of the Statue of Liberty, a picture of your child’s piano recital, and so on. So the value of phi, the reduction in uncertainty, is much greater for human beings than for the photodiode. Of course measuring phi is not going to be straightforward; Koch and Tononi say that for the moment it can only be done for very simple systems.

That’s fair enough, but one aspect of the argument seems problematic to me. The reduction of uncertainty, on this account, seems to be not vast but infinite. When the lights went out in your competition with the photodiode, you could have been seeing a picture of one grain of sand; you could have been seeing a picture of two grains of sand, and so on for ever. So it seems the value of phi for all conscious states is infinite. Hang on, though; is even darkness that simple? Remembering that Koch and Tononi’s integration is not limited to current experience, the darkness might cause you to think of what it’s like when you’re about to fall asleep; of an unexpected hiatus in the cinema eighteen months ago; of a black cat in a coal cellar at midnight. In fact, it might seem like the darkness which enshrouds a single grain of sand; the indefinably bulkier darkness which hides two grains of sand… So don’t both states involve an infinite amount of information, leaving the reduction – and hence the value of phi for all states of consciousness – at zero?
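
For what it’s worth, the bits-counting arithmetic behind the photodiode comparison can be sketched in a few lines. This is only the uncertainty-reduction intuition, not the real phi computation (which involves partitioning a system and is far harder), and the function name is mine, not Koch and Tononi’s:

```python
import math

def bits_of_uncertainty_reduced(n_states):
    """Entering one specific state out of n equally likely
    alternatives reduces uncertainty by log2(n) bits."""
    return math.log2(n_states)

# A two-state photodiode going dark rules out exactly one alternative.
print(bits_of_uncertainty_reduced(2))       # 1.0 bit

# A visual system that could have been seeing any of, say, a trillion
# discriminable scenes rules out vastly more when the lights go out.
print(bits_of_uncertainty_reduced(10**12))  # ~39.9 bits
```

Note that log2(n) grows without bound as the repertoire of discriminable states grows, which is exactly the worry about an infinite repertoire raised above.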

I think Koch and Tononi have succeeded in targeting some of the real fundamental problems of consciousness, and made a commendably bold and original attack on them. But they will need a very clear explanation of how their kind of integration works if IIT is really going to fly.

8 Comments

  1. Michael Baggot says:

    This is another one of those definitions that actually says nothing and covers everything. No matter what comes after these guys can always say “I told you so!” All they are saying is that the contents of consciousness are linked across multiple processing units into some sort of integrated whole – shall we resurrect the old Gestalt insight here – and that after some minimal level of pixilate integration, voila!, we have incipient phenomenality. Give me a break, this is nothing more than emergence theory with an added layer of abstraction. It both explains everything and nothing because it neglects to first define the underlying computational architecture which I propose REQUIRES phenomenality in order to function. What we have here is simply a solution in search of a problem.

    Michael Baggot

    BTW, I would like to extend my very best wishes to your wife and yourself as she works through her illness. May your lives soon return to normal. And thank you for maintaining this most interesting and thoughtful blog.

  2. Denise says:

    I don’t think consciousness requires knowledge that there is an outer connection to any recognizable object or entity. It simply IS.

  3. Peter says:

    No, I agree it doesn’t require an external connection; and it may be it doesn’t require any content at all – but I find that a bit hard to get my mind round. I’m not sure what exactly differentiates a contentless consciousness from unconsciousness.

  4. Denise says:

    Well…it’s a consciousness that is not aware it is “conscious.”

    Once, I lived in a big loft with my bed in the middle. Babysitting a 3-month-old baby, I had placed him in the center of my bed. As I walked around the bed, the baby’s eyes followed me with uninterrupted awareness, taking “out” “in” with absolutely no visible connection to “up”, “down”, “left” or “right.”

    To me consciousness is like what I saw in that baby: a serene, unafraid alien taking in its new environment for the first time, and finding “is” of interest.

  5. Lloyd Rice says:

    It seems to me that Koch & Tononi’s “reduction of uncertainty” can be seen as a measure of success in pattern recognition (PR) operations. If we take PR as one of the fundamental operations of intelligence, then your example of remembering different possibilities upon viewing a black scene could be the result of recognizing past associations of a black visual scene. And, of course, there might well also have been sound and other sensations coming in, even though the visual field was black. So there would be plenty of other inputs for the PR system to work on. But I agree that any such measure as K&T propose might well take on VERY large values (or reduce the uncertainty from VERY large values) before a reasonable level of intelligence is found.

    It is clear that for a computer to build up K&T’s “integrated information” will require extended access to a wide variety of experiences over a long “computer lifetime”. And there are some assumptions here that are not typically associated with computer operation. It is assumed at least a) that the “appropriate” software is capable of integrating a broad range of experiences into a single comprehensive world view, b) that the software either runs continuously or that all memory contents can be saved and restored as needed — and we’re talking about a LOT of memory distributed throughout a large network of processors, and c) that the computer is able to — and is allowed to — roam about, experiencing the world in whatever ways it is expected to learn about.

    Babies have it easy. They just do it. We tend to think that a computer will somehow be able to compress this vast range of experience into a few data analysis runs or maybe by watching a few movies or surfing the Internet. Of course, since such computer experiences could, at least in principle, be shared among computers, the whole range of experiences need not be repeated over and over by every computer in the way people have to do it. I suppose as the memory formats are worked out and common data structures are established among researchers, this will eventually be done. But such sharing has not yet been done to any significant degree.

    Basically, though, I agree with much that K&T have to say about consciousness.

    Denise, your comments about eye contact led me to ponder the possibility that consciousness might be directly associated with eye control. In agreement with your observations, it seems to me that this is one of the primary reasons we want to believe that animals have consciousness. In one discussion, I proposed that the mechanism for eye control was, in fact, the core of consciousness. My friend immediately pointed out that as far as we sighted people can tell, blind people have the same sorts of conscious experiences that we do; that other senses fill in much of what is visually absent. And yet, we have the sense that such people are just as conscious as we are. Also, we can close our eyes and remain conscious. Thus consciousness may be closely related to eye control, but it is clearly more than that.

  6. Denise says:

    I understand what you’re saying, Lloyd. I think of Helen Keller, who was both blind and deaf and was, certainly, a conscious entity. If seeing, hearing or even feeling discernible physical sensation is not part of consciousness, can someone who is brain-dead have consciousness? Can a tree?

  7. Lloyd Rice says:

    Denise, that’s the “hard problem”, of course. Until we understand enough about the brain to look in and see what’s going on, we can only speculate about the consciousness of other beings. Somebody put it nicely in a way that Peter has echoed somewhere in these pages (more or less): “Maybe only you and I are conscious and I’m not sure about you.” Not knowing, we must then rely on (hopefully) neurologically based speculation. I have stated my opinions, and my reasons for those beliefs, in comments on the page “Alien Consciousness”. I look forward to knowing more about the brain, tho I fear it may not happen within my lifetime. I believe that computational consciousness will be achieved, but my guess is that it will take another 20 years.

  8. Denise says:

    Lloyd,

    I looked at your comments on the page “Alien Consciousness.” To me, to my consciousness, and to most of my alien friends, you are taking the possibilities of inter-galactic and inter-dimensional communication too literally and from too human-centric a perspective.

    Think of a store with a 2-sided sign in the window. One (binary) side reads “Open;” the other (binary) side reads “Closed.” But sometimes, when the “Closed” sign is up, you can knock on the window and ask the store owner or employee if you can please be let in to get some needed milk. Depending on how you ask, what you look like (safe or dangerous), or the store tender’s mood, you might be let in.

    And sometimes, when the “Open” sign is up, the door is locked because the store’s tender is on an errand or taking a pee. They just forgot to change the sign, or didn’t think it was important, or didn’t care. You can’t always believe what you binarily see.
