Phi

Picture: Phi. I was wondering recently what we could do with all the new computing power which is becoming available.  One answer might be calculating phi, effectively a measure of consciousness, which was very kindly drawn to my attention by Christof Koch. Phi is actually a time- and state-dependent measure of integrated information developed by Giulio Tononi in support of the Integrated Information Theory (IIT) of consciousness which he and Koch have championed.  Some readable expositions of the theory are here and here with the manifesto here and a formal paper presenting phi here. Koch says the theory is the most exciting conceptual development he’s seen in “the inchoate science of consciousness”, and I can certainly see why.

The basic premise of the theory is simply that consciousness is constituted by integrated information. It stems from the phenomenological observations that there are vast numbers of possible conscious states, and that each of them appears to unify or integrate a very large number of items of information. What really lifts the theory above the level of most others in this area is the detailed mathematical underpinning, which means phi is not a vague concept but a clear and possibly even a practically useful indicator.

One implication of the theory is that consciousness lies on a continuum: rather than being an on-or-off matter, it comes in degrees. The idea that lower levels of consciousness may occur when we are half-awake, or in dogs or other animals, is plausible and appealing. Perhaps a little less intuitive is the implication that there must in theory be higher states of consciousness than any existing human being could ever have attained. I don’t think this means states of greater intelligence or enlightenment, necessarily; it’s more a matter of being more awake than awake, an idea which (naturally enough, I suppose) is difficult to get one’s head around, but has a tantalising appeal.

Equally, the theory implies that some minimal level of consciousness goes a long way down to systems with only a small quantity of integrated information. As Koch points out, this looks like a variety of panpsychism or panexperientialism, though I think the most natural interpretation is that real consciousness probably does not extend all that far beyond observably animate entities.

One congenial aspect of the theory for me is that it puts causal relations at the centre of things: while a system with complex causal interactions may generate a high value of phi, a ‘replay’ of its surface dynamics would not. This seems to capture in a clearer form the hand-waving intuitive point I was making recently in discussion of Mark Muhlestein’s ideas.  Unfortunately calculation of Phi for the human brain remains beyond reach at the moment due to the unmanageable levels of complexity involved;  this is disappointing, but in a way it’s only what you would expect. Nevertheless, there is, unusually in this field, some hope of empirical corroboration.

I think I’m convinced that phi measures something interesting and highly relevant to consciousness; perhaps it remains to be finally established that what it measures is consciousness itself, rather than some closely associated phenomenon, some necessary but not sufficient condition. Your view about this, pending further evidence, may be determined by how far you think phenomenal experience can be identified with information. Is consciousness in the end what information – integrated information – just feels like from the inside? Could this be the final answer to the insoluble question of qualia? The idea doesn’t strike me with the ‘aha!’ feeling of the blinding insight, but (and this is pretty good going in this field) it doesn’t seem obviously wrong either.  It seems the right kind of answer, the kind that could be correct.

Could it?

27 thoughts on “Phi”

  1. My immediate thought in regard to such an information-based theory is that it seems more like a theory of perception than a theory of “consciousness”. In the case of somnambulism (sleepwalking), would we not say that the person’s brain is integrating information in regard to comportment, but that they are unconscious? It seems then that the brain could integrate information in a nonconscious fashion. Accordingly, if we can make a distinction between information pickup and consciousness, then consciousness must be a different phenomenon altogether from the perception of stimulus information, something common to all organisms.

  2. Agreed that a sleepwalker is (seemingly) unconscious, or at least is in a dream-like conscious state, and it seems clear enough that the sleepwalker’s perceptions are working just fine, if not consciously so. But that alone does not tell us how the Phi value for that state would compare to the Phi value for either dreaming or “normal” consciousness.

  3. Pingback: Tweets that mention Conscious Entities » Blog Archive » Phi -- Topsy.com

  4. It would be interesting to compare the states, and whether there is memory impairment during said states (by asking an individual whether they were conscious, one may be testing for memory rather than consciousness). It is known that individuals can be in a perfectly conscious awake state and yet have no recall at all of such moments (even many hours of lucid talk, under some drugs, can vanish completely from recollection; as far as the individual knows, he wasn’t conscious during that time).

  5. “…some necessary but not sufficient condition…”

    After a first quick reading of the papers this is the impression I have.

    On p. 219 of the manifesto there is something that has drawn my attention, and I think it is at the core of the whole argument.

    It is stated: “…surgery has created two separate consciousness instead of one…” .

    For one observer? Is there one observer (patient) for each visual field? One individual with two consciousnesses? Can the patient (the main consciousness) focus attention on one of the visual fields? Can each visual field stimulate different emotions in the same observer?

    The implications of the statement are very deep.

    I think this statement should be carefully reconsidered. In any case this could mean that the integrated experience is not enough for the subjective experience of visual qualia, which to some extent defeats the purpose of the paper.

    And “…red square cannot be decomposed into the experience of the red and the experience of the square…”: I see it as the superposition of both. How does the brain compare a red square and a blue square, or a red square and a red triangle? By decomposing them into shape and color?

  6. I do not understand why they are attaching the label “consciousness” to information integration. A mathematical integration is a function that combines variables to give an output in response to an input. End of story.

    This approach may be useful in neuroscience and may provide some clues about consciousness but the authors have not established any identity between phi and consciousness.

  7. I’m not sure, to answer the question at the very end of your post, how this could be any more of an inroad on the “hard problem” than any other proposal that’s been offered so far. Why should information have a feeling from the “inside”? How could it? Nevertheless, and as always, this is interesting stuff…

  8. “…One implication of the theory is that consciousness lies on a continuum: rather than being an on-or-off matter, it comes in degrees. The idea that lower levels of consciousness may occur when we are half-awake, or in dogs or other animals, is plausible and appealing…”

    Could it be that “consciousness” is an on-or-off matter, and what we have are different degrees of “awareness”, including self-awareness? Does that make any sense?

    Like watching an image that ranges from very blurred to very sharp, but where having the image at all is an on/off kind of case.

    So consciousness would be something quite fundamental, different from and independent of the complexity of the receptor (the number of possible related states), and integrated information theory would then deal with levels of awareness (and perception, as said in previous comments).

    Being a non-native English speaker makes the use of consciousness/awareness difficult.

  9. “…consciousness lies on a continuum…”

    Is Phi a continuous function on its domain? It seems it produces a discrete spectrum of results due to the discrete nature of the variables of the system. So modifying the nodes and causal relations within one single system produces a discrete set of Phi values.

    Would that mean that degrees of awareness lie within a discrete set of Phi values, even if they are very close to each other, like a molecular spectrum?

  10. Vicente: yes, on-off consciousness with a continuum of awareness makes sense to me (Though I’m not quite sure how intuitively appealing it is).

    Would that mean that awareness degrees lie within the discrete set of Phi value…

    That’s very interesting (unfortunately my maths is not good enough to allow me to say anything really useful about it).

  11. As we read these posts, does our own Phi value vary?

    Is Phi analogous to photographic “depth of field” variation?

  12. Peter: “Vicente: yes, on-off consciousness with a continuum of awareness makes sense to me (Though I’m not quite sure how intuitively appealing it is).”

    Vic P: “Is Phi analogous to photographic “depth of field” variation?”

    Sounds like you are both looking at the contents of conscious experience … now how is that done?

  13. Vic P:… As we read these posts, does our own Phi value vary?…

    I like the question very much. Well, if I have understood the theory correctly, Phi in this case is a gauge for the capacity of your brain to produce integrated information outputs. So any activity that “enhances” your brain, e.g. creating new synaptic connections that allow more causal relations and more possible states in your brain, should increase your Phi, and the other way round. Does Phi vary over short periods of time? Probably very little in terms of incremental change; in a baby probably faster than in an adult.

    That is one of the reasons for which I don’t think Phi is related to consciousness but to other secondary mind features like awareness or even better “understanding”.

  14. Vic P’s question has made me realize that IIT/Phi does not take into account TIME.

    Given a network architecture, Phi accounts for the capacity to produce integrated information, but it says nothing about how fast this process is performed, or how long it takes to change from one state to another (that is part of the system’s “dynamics”). Time is very relevant for awareness, conscious experience and the experience of being.

    Let’s take vision. The time an image remains on the retina before refreshment has a great influence on the visual experience. If our brain could process many more images per unit of time we would probably see in slow motion, and the other way round: if the visual cortex were slower we would miss a lot of visual information.

    And the theory does not consider the speed of change of the environment either (input information).

    Although processors are different (no consciousness behind them), with them you have to consider the architecture, the clock frequency and the data-bus speeds to see the real performance in FLOPS. That means time.

    No, unless time is somehow incorporated into the theory I wouldn’t even consider it as a candidate to explain anything related to conscious experience as a whole. Maybe a sophisticated IQ test.

  15. Is the Phi value unchanged as you sit still; as you meditate; as you sleep?
    Or does it perhaps decay during such moments? Vicente’s observation would suggest that the math says it is unchanged. But can that be?

    Also, Vicente: I do not believe we would “see in slow motion”. My belief is that the perceptual processing “fixes” everything, adjusting to common bases for distance, time, etc. I believe, for example, that an insect sees a single, complete world, not vastly unlike what we see, and NOT the replicated, small images that some would have us believe.

  16. So am I saying I think an insect has consciousness? Hmmm. I have said elsewhere that I think it does not. Actually, I was just talking about how I thought the insect eye works. But consciousness vs. awareness… that is what several of the comments above have talked about, isn’t it?

    Peter: I just realized the comments are not numbered. I think that’s not a good thing.

    (Lloyd – I’ve put in numbering now. – Peter)

  17. First, to make my comment fair, the author makes reference to time implications in the theory (like the transient time needed to turn phi on) and later says that forthcoming papers will present techniques to find the natural spatiotemporal scale. I meant that conscious experience somehow flows, and that requires that time be fully considered for a properly complete theory.

    Lloyd, yes, you are right, slow motion was just a way of talking. But let’s say our nervous system, musculo-skeletal capacities and metabolism were much more powerful, faster overall. Probably our experience would be very different. For example, say somebody throws a stone at you: if you can process only a few images per second you might see the stone launch and next the stone hitting your head. On the other hand, if you could see many images per second you could track the whole trajectory and, if your legs are fast enough, dodge it. So as you said, all factors (senses, brain, body) have to be considered to create the experience.

    Now, the problem is how to relate physical time and subjectively experienced time. One thing is a clock assigning a time to each position of the stone in its trajectory, and another is you perceiving the speed of the stone. This is what made me use the incorrect analogy of slow motion. I think there are really several time references and clocks involved.

    So time is very important in conscious experience, though not for consciousness itself. Maybe stationary conscious states are possible, but I cannot imagine them.

  18. “…consciousness vs. awareness …”

    Lloyd, may I ask you to consider this case: two friends are watching a theatre play. One of them masters the language of the play, and the other does not at all. So the conscious experience in terms of images, sounds, etc., is very much alike for both. But for the one who understands the script the experience is quite different, much richer and more meaningful. For me that is part of the difference between consciousness and awareness/understanding.

    I think Phi could account for that extra part. Probably Phi is higher in the person who understands the script, due to the “brain features” (connections, circuits, etc.) that support that language knowledge.

    Another point is what “integrated” really stands for, since it is a pillar of the theory. Is it just coordination and synchronicity?

  19. “One implication of the theory is that consciousness lies on a continuum: rather than being an on-or-off matter, it comes in degrees. The idea that lower levels of consciousness may occur when we are half-awake, or in dogs or other animals, is plausible and appealing.”

    My main problem with this idea that animals are semi-conscious is the fact that evolution would rapidly favor those which are fully conscious. Surely, an alert dog is in an advantageous position when compared to a sleepy dog in terms of survival.

    One shouldn’t confuse intelligence with alertness. A wide-awake dunce is more alert than a sleepy Einstein and may have a better chance of survival in dodging an attacking animal.

    I don’t believe insects are any less ‘alert’ when compared to higher beings, assuming they are conscious at all in the first place.

  20. It seems to me that Phi as a continuous function implies a non-local and universal attribute. And to me that arises more from connectivity than from local computation, and it applies to everybody, insects and humans. I think the local one can be viewed as discrete and is generally put at 20–30 ms (the interval within which an individual cannot distinguish two events, or the timing necessary for a sequence of images to appear continuous, as in a movie).

  21. Vicente: Yes, I believe you have stated the same as I believe, that time/distance scales would be automatically compensated regardless of the available resolution. For example, a flying eagle scans the ground for prey. He probably sees a scene with about the same overall width that I would see it, but he sees objects between blades of grass that, in my view, are not resolved but remain blurry edges. There are also differences in the color spectrum. If his fourth cone type is within my color range, then I would guess that his view is not too much different than mine, except that he might see color distinctions between the colors I could discriminate. If his fourth cone type receptive frequencies lie outside of my spectral range, then he would see different things, such as I might see through a night-vision system. But again, those differences would be integrated into the same visual field. I would think that a pit viper snake would integrate the infra-red percepts in similar ways. It would not see the prey in a separate space from the visual field, but would simply “see” other aspects of it.

    As for the insect, I think that is a good case to make the distinction between awareness and consciousness. I do believe that the insect is, in some vital sense, “aware” of the visual image. But that does not mean it would necessarily be conscious of that image. I believe that consciousness does come in flavors, in the sense that it would be enhanced by the ability to take past experiences and future expectations into account. I am confident that animals differ greatly in these factors.

  22. When we drive down the road we may be conscious that the traffic lights are green or red, but in reality we are aware because we automatically press on the brake if the light is red. When we arrive at our destination we may be able to count and report how many red lights we stopped at. So we may say that what distinguishes awareness from consciousness is processing.

  23. “…I do believe that the insect is, in some vital sense, “aware” of the visual image. But that does not mean it would necessarily be conscious of that image…”

    If that is the case I have got it wrong for so long; I don’t know if due to an English language trap or because of my own thickness.

    For me consciousness is the phenomenological subjective experience of qualia, probably equal in basic terms for all sentient beings with similar senses.

    The problem to me, is that I require an observer/object model (workspace), and that eventually leads to the well known infinite chain observer/image that needs another observer and so on paradox.

    So the subject and object must somehow be the same, I haven’t got a clue how.

    Lloyd was mentioning the extended color vision of an eagle. Try to picture what qualia emerge in the brain of a shark, which can feel electrical fields thanks to the ampullae of Lorenzini, or of geese, which can feel the Earth’s magnetic field using (through some not very well understood mechanism) molecules with unpaired magnetic spin in their neurons. How do these animals, with senses not familiar to us at all, experience nature? Amazing.

    Then awareness/understanding for me has to do with the approach of the “observer” or “self?” to the conscious experience (if that makes any sense).

    For example: let’s have an eagle, myself (Chinese-illiterate) and a Chinese person looking at a Chinese pictogram. The conscious experience is very much alike for the three observers. But the eagle is not aware at all that it is looking at a pictogram; it considers it a stain. I am somehow aware that it is not a stain but a symbol, though I don’t know the meaning, and the Chinese person has the highest level of awareness because he knows the meaning of the symbol too.

    This example can be extended to the overall experience of being, at all levels.

    Then alertness, for me, has to do with attention. E.g. a deer is aware that some predators could be around, and then it is alert, so it focuses its attention on detecting the presence of the predator. Meanwhile it cannot look for food; attention is “focused” on another priority.

    Shankar: are you sure that evolution will rapidly favor animals with higher levels of awareness? See for example a shark vs. a dolphin. Teeth vs. neurons, hmmm.

    Mind you, you can have alertness without conscious attention, in automatic mode, like reactions. Some animals jump when they are exposed to a sudden change in brightness, because they think a predator is attacking from above; probably an evolved trait.

    So, going back to Phi: I think Phi could be a gauge for the capacity of a brain to achieve a certain level of “awareness/understanding”, but it says nothing about “consciousness”, which is a decoupled precondition.

    I don’t know how to interpret the point that, according to Phi, awareness levels lie in a discrete set.

  24. I was not very clear in “awareness” vs “conscious of”. Certainly the insect can use the information in the image for survival activities, but it may not be “aware” in the sense of being conscious of the image. What that means to me is that I think the insect cannot contemplate the image, ponder upon how it relates to other facts in the creature’s life. The traffic light example is a good point. We can respond to the light and take the correct actions, although not really being aware of it in any sense, such as being able to remember it in detail. We may not even remember whether we stopped at that particular corner.

    For me, it’s definitely a graded scale with many dimensions.

    As I understand it, Phi is essentially impossible to compute. So what’s it good for?

  25. “…The traffic light example is a good point. We can respond to the light and take the correct actions, although not really being aware of it in any sense, such as being able to remember it in detail. We may not even remember whether we stopped at that particular corner…”

    I think it is a good example too. For me it has to do with attention, not awareness. For the same reason, after having a walk you don’t remember any of the steps you’ve taken, unless at one of them something caught your attention, like hitting something with your foot. Thanks to these automation mechanisms we can live, and use our attention for better things. In some other cases we cause traffic accidents, by using the “default network” for active tasks that require attention. Buddhists have a good view on this topic.

    Maybe Phi could be computed approximately, by applying statistical methods to its arguments. You could avoid considering every single neuron, synapse and network; you could aggregate and average according to anatomical and histological data for different brain areas. Just to know roughly an order of magnitude for different systems would be interesting. Maybe.

  26. Peter suggests using “all the new computing power” to compute Phi. So an interesting question is what is the relationship between an individual’s Phi value and that of an “average” individual of a species? All of this new computing power probably includes lumping multiple machines and that means that we’re still short of being able to compute an individual’s Phi. But we might be able to get a number for an “average human”. Would it make any sense to do that? Or does all the real interest lie in the individual values?

  27. Lloyd, very good point. I think that in statistical terms what makes sense is to have both the individual and the species-average values. I bet Phi falls in a normal distribution “bell curve”, as happens with so many other physical and psychological parameters. With a big enough sample you could build Phi percentile charts and so on.

    To classify a certain individual as tall or short you need to know the height percentile table for humans. With anatomical parameters it is evident what the statistics mean; not so much with cognitive parameters.

    For me the very interesting point would be to cross-correlate Phi values with other intellectual and behavioural traits that can be measured. That could shed some light on what Phi really means.

    I think that is the only way to translate Phi to empirical or experimental terms, and try to deduce what is the concept or concepts behind it.

    Across species it would be very interesting to compare average Phi in humans and other species, chimps for example, or even hardware.

    Before dreaming, let’s see if anybody dares to try to somehow compute Phi, or anything as close to it as possible.
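    (A postscript on the dare to compute Phi: for a system small enough, the spirit of the measure can at least be illustrated. The sketch below is emphatically not Tononi’s full phi — it does no perturbational analysis, no search over the minimum-information partition, and no spatiotemporal grain — it just measures, for an invented two-node boolean network, how much information the whole system’s one-step dynamics carry beyond what the two halves carry separately. All names and the update rule are assumptions made for the example.)

```python
import itertools
import math

def entropy(dist):
    """Shannon entropy in bits of a {outcome: probability} dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_info(joint):
    """I(X;Y) in bits from a joint distribution {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Invented toy network: node A computes XOR of both nodes, node B computes AND.
def step(state):
    a, b = state
    return (a ^ b, a & b)

states = list(itertools.product([0, 1], repeat=2))
prior = 1.0 / len(states)  # uniform distribution over current states

# Joint distribution P(current state, next state) for the whole system.
whole = {}
for s in states:
    key = (s, step(s))
    whole[key] = whole.get(key, 0.0) + prior

# Marginal transition distributions for each part across the cut {A} / {B}.
part_a, part_b = {}, {}
for a, b in states:
    na, nb = step((a, b))
    part_a[(a, na)] = part_a.get((a, na), 0.0) + prior
    part_b[(b, nb)] = part_b.get((b, nb), 0.0) + prior

ei_whole = mutual_info(whole)  # 1.5 bits for this network
integration = ei_whole - mutual_info(part_a) - mutual_info(part_b)
print(f"whole-system information: {ei_whole:.3f} bits")
print(f"integration across the cut: {integration:.3f} bits")  # ~1.189
```

    Taken on its own, the XOR node carries zero information about its own next state (its future depends entirely on its neighbour), so nearly all of the 1.5 bits of whole-system information is integration across the cut: a crude illustration of the intuition that with integrated information the whole can exceed its parts.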
