Giulio Tononi’s Phi is an extraordinary book.  It’s heavy, and I mean that literally: presumably because of the high quality glossy paper, it is noticeably weighty in the hand; not one I’d want to hold up before my eyes for long without support, though perhaps my wrists have been weakened by habitual Kindle use.

That glossy paper is there for the vast number of sumptuous pictures with which the book is crammed; mainly great works of art, but also scientific scans and diagrams (towards the end a Pollock-style painting and a Golgi-Cox image of real neurons are amusingly juxtaposed: you really can barely tell which is which). What is going on with all this stuff?

My theory is that the book reflects a taste conditioned by internet use. The culture of the World Wide Web is quotational and referential: it favours links to good stuff and instant impact. In putting together a blog, authors tend to gather striking bits and pieces rather the way a bower bird picks up brightly coloured objects to enhance its display ground, without worrying too much about coherence or context. (If we were pessimistic we might take this as a sign that our culture, like classical Greek culture before it, is moving away from the era of original thought into an age of encyclopedists; myself I’m not that gloomy – I think that however frothy the Internet may get in places it’s all extra and mostly beneficial.) Anyway, that’s a bit what this book is like; a site decorated with tons of ‘good stuff’ trawled up from all over, and in that slightly uncomplimentary sense it’s very modern.

You may have guessed that I’m not sure I like this magpie approach.  The pictures are forced into a new context unrelated to what the original artist had in mind, one in which they jostle for space, and many are edited or changed, sometimes rather crudely (I know: I should talk, when it comes to crude editing of borrowed images – but there we are). The choice of image skews towards serious art (no cartoons here) and towards the erotic, scary, or grotesque. Poor Maddalena Svenuta gets tipped on her back, perhaps to emphasise the sexual overtones of the painting – although they are unignorable enough in the original orientation. This may seem to suggest a certain lack of respect for sources and certainly produces a rather indigestible result; but perhaps we ought to cut Tononi a bit of slack. The overflowing cornucopia of images seems to reflect his honest excitement and enthusiasm: he may, who knows, be pioneering a new form which we need time to get used to; and like an over-stuffed blog, the overwhelming gallimaufry is likely here and there to introduce any reader to new things worth knowing about. Besides the images the text itself is crammed with disguised quotes and allusions.  Prepare to be shocked: there is no index.

I’m late to the party here. Gary Williams has made some sharp-eyed observations on the implicit panpsychism of Tononi’s views;  Scott Bakker rather liked the book and the way some parts of Tononi’s theory chimed with his own Blind Brain theory (more on that another time, btw). Scott, however, raised a ‘quibble’ about sexism: I think he must have in mind this hair-raising sentence in the notes to Chapter 29:

At the end, Emily Dickinson saves the day with one of those pronouncements that show how poets (or women) have deeper intuition of what is real than scientists (or men) ever might: internal difference, where all the meanings are.

Ouch, indeed: but I don’t think this is meant to be Tononi speaking.

The book is arranged to resemble Dante’s Divine Comedy in a loose way: Galileo is presented as the main character, being led through dreamlike but enlightening encounters in three main Parts, which in this case present in turn, more or less, the facts about brain and mind – the evidence, the theory of Phi, and the implications. Galileo has a different guide in each Part: first someone who is more or less Francis Crick, then someone who is more or less Alan Turing, and finally for reasons I couldn’t really fathom, someone who is more or less Charles Darwin (a bit of an English selection, as the notes point out); typically each chapter involves an encounter with some notable personality in the midst of an illuminating experience or experiment; quite often, as Tononi frankly explains, one that probably did not feature in their real lives. Each chapter ends with notes that set out the source of images and quotations and give warnings about any alterations: the notes also criticise the chapter, its presentation, and the attitudes of the personalities involved, often accusing them of arrogance and taking a very negative view of the presumed author’s choices. I presume the note writer is, as it were, a sock puppet, and I suppose this provides an entertaining way for Tononi to voice the reservations he feels about the main text, backing up the dialogues within that text with a kind of one-sided meta-textual critique.

Dialogue is a long-established format for philosophy and has certain definite advantages: in particular it allows an author to set out different cases with full vigour without a lot of circumlocution and potential confusion. I think on the whole it works here, though I must admit some reservation about having Galileo put into the role of the naive explorer. I sort of revere Galileo as a man whose powers of observation and analysis were truly extraordinary, and personally I wouldn’t dare put words into his mouth, let alone thoughts into his head: I’d have preferred someone else: perhaps a fictional Lemuel Gulliver figure. It makes it worse that while other characters have their names lightly disguised (which I take to be in part a graceful acknowledgement that they are really figments of Tononi) Galileo is plain Galileo.

Why has Tononi produced such an unusual book? Isn’t there a danger that this will actually cause his Integrated Information Theory to be taken less seriously in some quarters? I think to Tononi the theory is both exciting and illuminating, with the widest of implications, and that’s what he wants to share with us. At times I’m afraid that enthusiasm and the torrent of one damn thing after another became wearing for me and made the book harder to read: but in general it cannot help but be engaging.

The theory, moreover, has a lot of good qualities. We’ve discussed it before: in essence Tononi suggests that consciousness arises where sufficient information is integrated. Even a small amount may yield a spark of awareness, but the more we integrate, the greater the level of consciousness. Integrated potential is as important as integrated activity: the fact that darkness is not blue and not loud and not sweet-tasting makes it, for us, a far more complex thing than it could ever be to an entity that lacked the capacity for those perceptions. It’s this role of absent or subliminal qualities that makes qualia seem so ineffable.

This makes more sense than some theories I’ve read but for me it’s still somewhat problematic. I’m unclear about the status of the ‘information’ we’re integrating and I don’t really understand what the integration amounts to, either. Tononi starts out with information in the unobjectionable sense defined by Shannon, but he seems to want it to do things that Shannon was clear it couldn’t. He talks about information having meaning when seen from the inside, but what’s this inside and how did it get there? He says that when a lot of information is aggregated it generates new information – hard to resist the idea that in the guise of ‘new information’ he is smuggling in a meaningfulness that Shannonian information simply doesn’t have.  The suggestion that inactive bits of the system may be making important contributions just seems to make it worse. It’s one thing for some neural activity to be subliminal or outside the zone of consciousness: it’s quite different for neurons that don’t fire to be contributing to experience. What’s the functional difference between neurons that don’t fire and those that don’t exist? Is it about the possibility that the existing ones could have fired? I don’t even want to think about the problems that raises.

I don’t like the idea of qualia space, another of Tononi’s concepts, either. As Dennett nearly said, what qualia space? To have an orderly space of this kind you must be able to reduce the phenomenon in question to a set of numerical variables which can be plotted along axes. Nobody can do this with qualia; nobody knows if it is even possible in principle. When Wundt and his successors set out to map the basic units of subjective experience, they failed to reach agreement, as Tononi mentions. As an aspiration qualia space might be reasonable, but you cannot just assume it’s OK, and doing so raises a fear that Tononi has unconsciously slipped from thinking about real qualia to thinking about sense-data or some other tractable proxy. People do that a lot, I’m afraid.

One implication of the theory which I don’t much like is the sliding scale of consciousness it provides. If the level of consciousness relates to the quantity of information integrated, then it is infinitely variable, from the extremely dim awareness of a photodiode up through flatworms to birds, humans and – why not – to hypothetical beings whose consciousness far exceeds our own. Without denying that consciousness can be clear or dim, I prefer to think that in certain important respects there are plateaux: that for moral purposes, in particular, enough is enough. A certain level of consciousness is necessary for the awareness of pain, but being twice as bright doesn’t make my feelings twice as great. I need a certain level of consciousness to be responsible for my own actions, but having a more massive brain doesn’t thereafter make me more responsible. Not, of course, that Tononi is saying that, exactly: but if super-brained aliens land one day and tell us that their massive information capacity means their interests take precedence over ours, I hope Tononi isn’t World President.

All that said, I ought to concede that in broad terms I think it’s quite likely Tononi is right: it probably is the integration of information that gives rise to consciousness. We just need more clarity about how – and about what that actually means.

43 Comments

  1. Eric Thomson says:

    My hunch is there is quite a bit of information integrated during phototaxis in plants….To get over this promiscuity problem (in which even thermostats are a little bit conscious), he should just say that information integration is necessary, not sufficient, for consciousness to exist in a system.

  2. Vicente says:

    Well, after reading Tononi’s original papers about Phi… a few concerns.

    1) In this context, in order to have information we need to have consciousness. This is not like the artificial concept of information used in physics. So even if the information integration by the CNS (I’d rather say coordination) raises consciousness, we need a raw conscious substrate to be there in the first place. It could be that the substrate and the contents (info) feed back on each other in a closed loop to reach the final conscious state.

    2) The information concept needs a better definition to be used in this context. These are not bits and bytes (Shannon just produced a model to facilitate telecom engineering; there are just electrons drifting along the wire – information exists only in the sender’s or receiver’s mind). I will not insist on the idea that even the electrons, the wires, etc. are themselves concepts in our minds.

    So, information raises consciousness only insofar as there is already consciousness there to handle information. Otherwise we need a threshold: something like, at a certain point a brain reaches a phi threshold value that turns on consciousness in an on/off fashion. If the threshold value could be naught, and we take a physical approach to information, then we are close to panpsychism and to Clifford’s “mind stuff” elements.

    3) The same phenomenological set-up could provide very different levels of information depending on the “observer”. Could we say that consciousness is different for each case? I believe we have to distinguish between raw qualia and cognitive or epistemic qualia. The latter probably depend on memory, sensory data and processing (info coordination – phi). The former, I doubt it.

    I agree with Eric, a high phi is probably necessary but not sufficient.

  3. DiscoveredJoys says:

    I’m not yet ready to be convinced that consciousness has anything to do directly with information. What if consciousness is a quale (or possibly qualia or metaqualia) – the subjective experience of being aware and ascribing meaning to sensory impressions?

    Yes, a greater volume of sensory information might increase the feeling of consciousness, but organisms/devices without awareness, or without the ability to rank sensory impressions by ‘meaning’, could not experience consciousness. That’s a no when it comes to hurting the feelings of a thermostat, a desktop PC, or bacteria.

  4. scott bakker says:

    Peter: This is a far more even-handed review than my own! And you’re right, “What’s the inside and how did it get there?” is the million dollar question. I also think you’re right about Tononi’s enthusiasm blinding him to this and other glaring issues. The thing that *works* about Q-space, I would argue, is the handy way it circumvents ‘observer intuitions,’ and so provides a plausible way of understanding the apparent simplicity, ineffability, privacy and irreducibility of qualia.

    Eric: Definitely agreed. It would save him a lot of philosophical grief – or at least, grief from philosophers!

    Vicente: In his most recent papers Tononi assimilates information to causal systematicity. Otherwise I don’t quite see how consciousness has to be ‘preinformatic.’ The panpsychism stuff actually *isn’t* implied by his approach, though he seems to think it is. Like Eric says above, all he needs to do is qualify.

  5. Arnold Trehub says:

    Scott: “In his most recent papers Tononi assimilates information to causal systematicity.”

    If information is simply “causal systematicity” then any proper physical mechanism exhibits information. The question of whether or not consciousness is ‘preinformatic’, it seems to me, is murky because the notion of information in this context is itself ill-defined.

    I have found it useful to think about information this way:

    Information is any property of any object, event, or situation that can be detected, classified, measured, or described in any way.

    1. The existence of information implies the existence of a complex physical system consisting of (a) a source with some kind of structured content (S), (b) a mechanism that systematically encodes the structure of S, (c) a channel that selectively directs the encoding of S, and (d) a mechanism that selectively receives and decodes the encoding of S.

    2. A distinction should be drawn between latent information and what might be called kinetic information. All structured physical objects contain latent information. This is as true for undetected distant galaxies as it is for the magnetic pattern on a hard disc or the ink marks on the page of a book. Without an effective encoder, channel, and decoder, latent information never becomes kinetic information. Kinetic information is important because it enables systematic responses with respect to the source (S) or to what S signifies. None of this implies consciousness.

    3. A distinction should be drawn between kinetic information and manifest information. Manifest information is what is contained in our phenomenal experience. It is conceivable that some state-of-the-art photo-to-digital translation system could output equivalent kinetic information on reading English and Russian versions of War and Peace, but a Russian printing of the book provides me no manifest information about the story, while an English version of the book allows me to experience the story. The explanatory gap is in the causal connection between kinetic information and manifest information.

    So in this scheme, *kinetic information* alone does not require consciousness. However, if we are talking about *manifest information*, then consciousness/subjectivity is, indeed, a precondition; i.e., consciousness has to be ‘preinformatic’.

  6. Vicente says:

    Scott: Briefly, because Tononi does not even slightly tackle the hard problem of consciousness. His theory seems to show how to rank (if not discard) conscious contents – why some of them seem to be more meaningful. I would call it a theory about intentionality, or an explanation of why some minds can penetrate the environment more efficiently. But you already need the conscious substrate in place. Probably phi applied to robots, or AI systems in general, would be a good measure of their AI level.

    Consciousness has to be preinformatic, because it is the main information enabler – sine qua non.

    To me, what he says is: certain brain states that meet some conditions in terms of their “components” harmony… or coherence… synchronicity…etc can be candidates to “enter” the conscious space. I find it sound.

    For example, patients who have undergone a callosotomy should show a drastic reduction in their consciousness, since the operation prevents the two hemispheres from exchanging information, with a huge impact on their information integration and a consequent reduction in phi. Well, that is not the case: what is observed is that their consciousness remains unaffected, though their conscious states are altered, as are some cognitive capacities. Even more important, and surprisingly, their feeling of self is not modified either. Is there anything more integrated than your feeling of yourself – the information integration that produces YOU?

    How is information coded in the brain? Let’s solve that first, and then we’ll see.

  7. scott bakker says:

    Arnold: We debated the kinetic/manifest information divide a while back, and like I said back then, I don’t see what we gain by making the distinction (aside from servicing our (likely deceptive) semantic intuitions). Just consider your War and Peace metaphor in genetic terms. Does the fact that whale sperm cannot inseminate spider ova mean that we need to split our notion of information in two, or that the information contained in whale sperm is simply not *matched* (by virtue of having a different natural history) to spider ova?

    Vicente: I think you’re right about IITC not denting the Hard Problem – but like Peter mentions, Tononi’s enthusiasm seems to lead him to overlook a number of conceptual difficulties.

    You’re presuming a *semantic* conception of information, whereas Tononi (like myself) is presuming the non-semantic one that is presently revolutionizing our world. In a sense, you’re simply begging the question against him: He’s giving you the outline of an account that explains why certain information is taken up as informative: War and Peace written in English, when integrated into an English speaker’s cognitive systems, becomes meaningful. With a Russian speaker, it’s integrated as a brick of paper. Informatic effectiveness turns on ‘ecological matching.’

    If you make information *as it is experienced* your criterion for what counts as ‘actual’ information, then you need to explain what all that other ‘information’ that’s making this webpage among other things possible is, and why, despite so many appearances to the contrary, including things like cochlear implants and artificial retinas, it has no bearing on the brain or consciousness.

    We do need to nail down the code, as you say, but do you really think it *won’t* consist of systematic differences making systematic differences – which is to say, that it won’t be as mechanistic as everything else we know of in the life sciences?

  8. Arnold Trehub says:

    Scott: “Arnold: We debated the kinetic/manifest information divide a while back, and like I said back then, I don’t see what we gain by making the distinction (aside from servicing our (likely deceptive) semantic intuitions).”

    Scott, would you claim that your own argument contra the distinction between kinetic and manifest information is NOT based on your own semantic intuitions?

    One very important thing that we gain by distinguishing between kinetic and manifest information is in the ability to label the two essential KINDS of information in a brain that is full of information. The vastly greater part of information in our brain is non-conscious and pre-conscious; this is the kinetic kind, shared with many types of artifacts. The other part of information in our brain is subjective/conscious; this is the manifest kind, shared with no known artifact.

    On another note, I enjoyed your account of the discussion between Al and Mat. What do you think Al would say about my working definition of consciousness?

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*

  9. Charles Wolverton says:

    Arnold: No need to reinvent the wheel. All of your concepts correspond to well-established ones in standard comm/info theory:

    structured system – your description of this one is a bit vague, but what you mean is essentially an event space (AKA “system”) on which a probability distribution has been defined. In the simplest case, there are a finite number of states and hence a finite probability distribution: {p(i)}, i=1,…,n.

    latent information – “entropy” in the Shannon sense (see the Wikipedia article “Entropy (information theory)”), a measure of the a priori uncertainty as to which of the n events will happen next.

    kinetic information – again, a bit vague, but what you’re describing is sending (by modulating some medium) a “channel symbol” identifying which of the n possible events occurred. Eg, in your “ink on paper” example the event space comprises the, say, unicode character set, the medium is the paper, and the “modulation” is printing a particular shape. The translation of an event into a channel symbol is called “channel encoding” (as opposed to “error-correcting encoding”, which is something else). But contrary to your description, correct decoding of channel symbols doesn’t enable “systematic responses” (at least as I interpret that phrase) since that would have to do with semantics. Think “one if by land, two if by sea”. Hoisting the appropriate number of lanterns conveyed one bit of channel information; what to do in response is a different matter, requiring a priori agreement between sender and receiver.

    manifest information – I’m not quite sure, but I think this corresponds to semantic information. The automatic “translator” (AKA “channel decoder”) identifies which of the 26 possible patterns is printed at a position on the page thereby resolving that a priori uncertainty. Resolution of the “explanatory gap” between channel symbols and semantics in simple cases like the lanterns is quite simple, essentially table look-up of a corresponding action. Resolving it in the case of “experiencing” a great novel is, of course, an entirely different matter.
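    The lantern example above can be sketched in a few lines of Python (the event names and responses here are purely illustrative, not from the original): channel decoding merely resolves which event occurred, while acting on it requires a separate, prearranged semantic table.

```python
# Sketch of the "one if by land, two if by sea" point: channel decoding
# and semantic interpretation are distinct steps, each needing a prior
# agreement between sender and receiver.

# Channel code agreed in advance: event -> number of lanterns hoisted.
CHANNEL_CODE = {"by land": 1, "by sea": 2}
CHANNEL_DECODE = {v: k for k, v in CHANNEL_CODE.items()}

# Semantics requires a *separate* prior agreement: event -> response.
SEMANTICS = {"by land": "ride inland", "by sea": "ride along the shore"}

def send(event):
    """Channel encoding: modulate the medium (hoist N lanterns)."""
    return CHANNEL_CODE[event]

def receive(lanterns):
    """Channel decoding resolves one bit of uncertainty -- no meaning yet."""
    return CHANNEL_DECODE[lanterns]

event = receive(send("by sea"))   # decoding alone identifies the event
action = SEMANTICS[event]         # table look-up supplies the semantics
```

    Correct decoding with a missing or mismatched semantic table would leave the receiver with a channel symbol and nothing to do – which is the sense in which “systematic responses” belong to semantics, not to the channel.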

    All: Although from a quick skim through “Phi” I don’t recall many – if any – equations, in other work Edelman and Tononi use “information” (and entropy) in the Shannon sense (as do Crick and Koch). Their “integrated information” is (essentially, I think) a measure of the loss of entropy, relative to that of an event space of independent events, when the events are not independent (ie, are in some sense “integrated”). Eg, for a simple binary camera with 16 sensors firing independently with a uniform probability of firing, the joint entropy would be 16 bits. If the sensors were connected so that all either fire or don’t, the entropy would be 1 bit, resulting in an integrated information of 15 bits. (I think this is right; but if not, hopefully it nonetheless conveys the gist.)
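    The entropy bookkeeping in the camera example can be checked with a short Python sketch (a toy model, assuming fair binary sensors): the joint entropy of n independent sensors is n bits, collapsing to 1 bit when they are fully coupled, with the difference playing the role of “integrated information”.

```python
import math

def entropy_bits(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

N = 16  # number of binary sensors in the toy camera

# Independent fair sensors: 2**N equally likely joint states -> N bits.
h_independent = entropy_bits([1 / 2**N] * 2**N)

# Fully coupled sensors: all fire together or none do -> 2 states, 1 bit.
h_coupled = entropy_bits([0.5, 0.5])

# "Integrated information" as the entropy lost through the coupling.
integrated = h_independent - h_coupled
print(h_independent, h_coupled, integrated)  # 16.0 1.0 15.0
```

    This is of course only the crude entropy-difference version of the idea; Tononi’s actual phi measure involves partitioning the system, but the gist is the same.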

  10. Vicente says:

    Arnold:

    Would it be interesting to calculate phi for the retinoid space and compare it with the value of phi for another brain system that we know to be unconscious (eg some area in the cerebellum)? Or with the whole nervous system of a simple worm – of course using some simple models (logic gates). If an incredibly high phi comes out for the retinoid space architecture, it could mean something.

    Scott:

    It is either semantic information or nothing. The way in which physicists and engineers use the term information is just a conceptual tool to ease their work and to devise Human-Machine Interfaces. There is nothing real behind it.

    Now, if you want to say that the Sun and the Earth interact with each other (gravitationally, thermodynamically, etc) because they exchange information (they communicate), it’s up to you, but in real terms it is meaningless. A pure harmonic transmits three data (info) – frequency, amplitude and phase – TO YOUR MIND; otherwise it is just a wave, and if, eg, a leaf uses it for photosynthesis, it knows nothing about harmonics and info.

    Charles:

    I think bringing in entropy is a good idea. Let’s go to a lower level. Life is a fantastic piece of stuff for a physicist: it takes the second principle to the extreme, but it doesn’t violate it. The second principle states that the production of entropy has to be positive, not that the increment of entropy has to be positive. So living creatures diminish their entropy at the expense of their surroundings. Fantastic systems! Probably if we were to study the brain’s phase space, we would see that it diminishes its entropy even more than the rest of the organism, since the states it reaches have a lower probability of happening (thermodynamically, or in terms of order) than those of, eg, the stomach. We could call this information integration.

    Now we could state a hypothesis: consciousness is related to low-entropy systems. Well, we knew that. As far as we know, consciousness only happens in living brains. In general, it seems that introducing order into chaos has surprising effects – life, consciousness… striking the lottery, in probability terms.

    All these considerations seem to lead to necessary but not sufficient conditions, as far as consciousness is concerned.

    Before talking about bits and symbols and Shannon’s entropy, shouldn’t we sort out how information is coded in the brain?

  11. VicP says:

    Peter: I like the IITC because it ties in closely with the Heidegger “Readiness At Hand” which you also previously discussed.
    http://www.consciousentities.com/?p=452 Heidegger uses the Hammer example because it exemplifies the conscious to unconscious transitional ties between our brain and motor system. For early 20th century, a hammer was a ready piece of technology found in most modern homes. Had Heidegger written today perhaps his writing would have resembled your piece and the Readiness At Hand of the World Wide Web.

    Vicente, Scott: For callosotomy patients, the reading of War and Peace through only the left hemisphere or right hemisphere yields interesting results as well.

    Anyway let this bower bird take a final peck at coherence and context: If the self is located beneath the neocortex in the lower brain, we could say we begin as a fundamental quale of being into which the brain integrates its in-formation or forms information states primarily for motor skill function. Stated otherwise: the quale of being is a fixed material coherence which forms our own personal fixed being or “fixed time” to which other brain mechanisms, i.e. the Retinoidal Platform, affix themselves and impress information states.

    If being is a fundamental fixed property of vertebrate organisms, the mystery is this inner coherency which manifests between neural cells. The Zombie conjecture makes us imagine a being with the same exact neural firing patterns, muscle movements, phonetic generations etc.; and points us towards this mystery of “inner beingness” of neural activity.

    I don’t think we have to transport ourselves to the Island of Modalia to engage in conceivability arguments and miss the point.

    To summarize, I quote Tom Nagel: “To summarize. The conjecture is essentially this: that even though no transparent and direct explanatory connection is possible between the physiological and the phenomenological, but only an empirically established extensional correlation, we may hope and ought to try as part of a scientific theory of mind to form a third conception that does directly entail both the mental and the physical, and through which their actual necessary connection with one another can therefore become transparent to us. Such a conception will have to be created; we won’t just find it lying around. All the great reductive successes in the history of science have depended on theoretical concepts, not natural ones – concepts whose whole justification is that they permit us to replace brute correlations with reductive explanations. At present such a solution to the mind-body problem is literally unimaginable, but it may not be impossible.”

    I have the intuitive notion that neurons are unique because of their physical connectedness, which allows them to pass functional states between themselves and form superneuron clusters, which leads to phenomenal conscious states.

  12. Arnold Trehub says:

    @Vicente: In the moments before I awaken from a deep dreamless sleep, PHI in my brain’s retinoid space = 0 while, at the same time, in my preconscious cognitive mechanisms, which are active during sleep, PHI >> 0. At the moment I awaken from this deep sleep, PHI in retinoid space > 0, but much less than PHI in my preconscious cognitive mechanisms. So, according to the retinoid model of consciousness, the magnitude of PHI cannot be a decisive indicator of the presence or absence of consciousness.

    I agree with you that before we talk about bits and entropy and Shannon’s concept of information, we should sort out how information is coded in the brain. My own suggestions are given in *The Cognitive Brain* (MIT Press 1991). In this book I talk about synaptic transfer weights, filter-cells, class-cell tokens, leaky integrators, autaptic neurons, learning, recall, and imagery. Then the minimal models of the brain’s information processing mechanisms are tested in computer simulations, and their behavioral predictions compared to empirical findings.

    @Charles: I wasn’t proposing to replace standard communication/information theory. I was just describing a simple scheme that I found helpful in thinking about information in the context of cognitive brain mechanisms. On the question of manifest information, I disagree with your equating it with semantic information because in my theory of the cognitive brain semantic information can exist both as preconscious information (i.e., kinetic, in synaptic matrices) and conscious information (i.e., manifest, in retinoid space). For example, see Fig. 7 in *Space, self, and the theater of consciousness*.

  13. scott bakker says:

    Arnold: “Scott, would you claim that your own argument contra the distinction between kinetic and manifest information is NOT based on your own semantic intuitions?”

    Actually, no, I wouldn’t! The ‘semantic perspective’ only seems all-encompassing. What you’re asking is akin to the cardinals telling Galileo that since his intellect is of God, it can only be for God. People in the grips of various heuristics cannot help but universalize them (for a myriad of mundane psychological reasons, as well as (I think, anyway) a couple of deep neurological ones).

    So my question to you would be: Do you think that semantic cognition is a *universal* problem-solver, or just a particularly powerful heuristic? This is a tough one to answer.

    Arnold: “On another note, I enjoyed your account of the discussion between Al and Mat. What do you think Al would say about my working definition of consciousness?

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*”

    I’m glad you liked it! I’m working through a point by point comparison between my position and Dennett’s that I’ll be posting soon. I would appreciate any feedback I can get.

    Regarding retinoid theory, Al would say that you could be right on all the coarse-grained details of your definition, but that your reliance on intentional idioms is going to cause you problems (viz., put a cap on the effectiveness of your formulations). From my standpoint, I think BBT, which is a theory of the *appearance* of consciousness, could help you rebut many of the kinds of ‘in principle’ objections that you face.

  14. scott bakker says:

    Vicente: “It is either semantic information or nothing. The way in which physicists and engineers use the term information is just a conceptual tool to ease their work, and to devise Human-Machine Interfaces. There is nothing real behind it.

    Now, if you want to say that the Sun and the Earth interact with each other (gravitationally, thermodynamically, etc) because they exchange information (they communicate), it’s up to you, but in real terms it is meaningless. A pure harmonic transmits three data (info): frequency, amplitude and phase, TO YOUR MIND; otherwise it is just a wave, and if e.g. a leaf uses it for photosynthesis, it knows nothing about harmonics and info.”

    I would like to unpack what you mean by “nothing real behind it” here. Do you mean that nonsemantic information is nothing but a “useful fiction”? So what makes it so useful, then? So useful, in fact, that it underwrites the whole of technological civilization?

    Are you suggesting that nothing is “real” until it enters the purview of human meaning? Are you a full-blooded idealist, Vicente? If so, how do you respond to the standard reductios?

    But I actually think the question I just asked Arnold best captures the kinds of knotty commitments you *seem* to be taking on: Do you think that semantic cognition is a *universal* problem-solver, or just a particularly powerful heuristic?

  15. scott bakker says:

    VicP: Nagel’s “third conception” is exactly what I *think* I’ve stumbled across with Blind Brain Theory. If you’re interested, check out: http://rsbakker.wordpress.com/2012/09/27/thinker-as-tinker/

    This post is about as reader friendly as I’ve been able to make it thus far. Lunacy, as they say, is simply a *chronic* inability to communicate one’s reasons!

  16. Charles Wolverton says:

    Vicente: I’m using “entropy” in the sense Tononi and Edelman do – Shannon’s measure of a priori uncertainty of random variables – not in a sense relevant to thermodynamics.

    And my point wasn’t that we should “talk about bits and symbols, and Shannon’s entropy” but that others were doing so implicitly using non-standard, and therefore confusing, terminology. I actually prefer a more “system engineering” style approach like that of Davidson in his “Subjectivity, Intersubjectivity, Objectivity” essays. (Although neither he nor anyone else besides me would apply that label to his work.)

    Arnold: Understood, but the first few terms you define are based on descriptions that are almost exactly those of a standard (simple) comm system. The only reason the descriptions are only “almost” exact is that they are, as I said, somewhat “vague”. And the concepts for which you have defined new terms already have well-known names. In an area that is already replete with ill-defined and redundant terminology, it seems no contribution to add to the linguistic litter.

    All: I just skimmed the Tononi chapter that introduces Phi, and see a possible issue. Since Tononi chose to avoid the math, we have only his verbal description in terms of a camera to work with. I’m going to translate it into comm terms.

    So, consider two channels each with channel information rate 0.5 Mbs (megabit per sec). Through one send a digital representation of the character string “SO”, through the other a representation of the character string “NO”. Then consider sending a digital representation of the character string “SONO” through a single 1 Mbs channel. Tononi argues that since an Italian speaker can recognize the output of the single channel as the Italian word “SONO” but not the separate outputs of the two channels independently, there must be additional “information” associated with the “integrated” channel. But this seems to mix channel information and semantic information. The output of the 1 Mbs channel is recognizable as an Italian word only if there is prior agreement between sender and receiver that the output is to be interpreted as a word, and specifically as an Italian one. Simply adding an agreement that the outputs of the separate channels are to be combined in a prescribed way yields the same result. Ie, the additional “information” is due to adding semantics – what is to be done in response to the channel information – not to integrating the channels.
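Wolverton’s point can be made concrete in a few lines (a toy of my own, not Tononi’s formalism): once sender and receiver agree on a combination rule, the two half-rate channels reconstruct “SONO” exactly as the single channel does, so the “extra” information lies in the semantic agreement, not in integrating the channels.

```python
# Toy version of the "SONO" example: two independent channels plus an
# agreed combination rule recover the same string as one integrated
# channel. The encode/decode pair stands in for a noiseless channel.

def send(s):                      # a noiseless channel: bytes out = bytes in
    return s.encode("ascii").decode("ascii")

# Two 0.5 Mbs channels carrying "SO" and "NO" independently.
out_a = send("SO")
out_b = send("NO")

# One 1 Mbs channel carrying "SONO".
out_single = send("SONO")

# Prior agreement: concatenate channel A's output with channel B's.
combined = out_a + out_b

print(combined == out_single)     # True: no information was gained by
                                  # integrating the channels themselves
```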

    Not to suggest that Tononi’s concept of Phi is not meaningful, just that his motivating example doesn’t seem to work. In Phi, Tononi seems to reverse the sign on the mathematical definition of “integrated information” given in “A Universe of Consciousness” where more system integration leads to less integration information. The latter seems to me more intuitive since in a distributed processing model (eg, the GWS idea) one would expect that coordinating the activities of the multiple processors would indeed result in a system entropy (AKA, integration information) lower than the sum of the entropies of the component processors operating independently. Which seems to agree with the idea that “consciousness” might require such coordination, and the more the better.

  17. Vicente says:

    Scott,

    What I am definitely not is a full-blooded naive realist. Actually, in what respects understanding the very nature of outer reality, I would be a counter-idealist. What is really idealistic is to think that our models of the world capture its essence. Maybe we should all revisit Kant’s noumena and phenomena. But things are getting even worse: your reductios are sort of evaporating when we see that the observer plays a fundamental role in the experiment, yet still we can get it into the equations…

    Mind your delusions of understanding: it seems to be quite clear that we base all our modelling on metaphorical structures. Basically this means that for you to understand what an electron is, you need to compare it with a pea, and that’s why Rutherford’s and Thomson’s atom models preceded Bohr’s (counter-intuitive, just mathematical). Have a look at the results of Lakoff and Dehaene on semantic cognition and embodiment, and you’ll see your abstract thought limitations. This is why the results that complex mathematical models produce, Quantum Physics, Relativity, are mind crackers: there is nothing in your kitchen you can use to resemble them, you can’t create a metaphor, and you don’t truly understand them.

    When I say there is nothing real behind it, I mean that there is no information in the real outer stuff, not that there is no real outer stuff. We interact with the world and then, your mind creates a world for you, correlated up to a certain extent with the outer one (you survive every day!), but don’t confuse both worlds.

    So, in order to handle information, you need consciousness to produce it in the first place. Of course, the way in which this information is produced and processed (integrated?) can enhance the level of consciousness achieved, and phi could be a measure of that.

    Arnold,

    If PHI >> 0 in the preconscious mechanisms, how is it that they are preconscious? Contradictory? PHI = 0 in retinoid space during deep sleep, because it’s switched off.

  18. scott bakker says:

    Vicente: “When I say there is nothing real behind it, I mean that there is no information in the real outer stuff, not that there is no real outer stuff. We interact with the world and then, your mind creates a world for you, correlated up to a certain extent with the outer one (you survive every day!), but don’t confuse both worlds.”

    So there’s no such thing as systematic differences making systematic differences? That’s literally all I mean by information. But your answer actually begs the question. How about ‘objects,’ or ‘causality’? I know it seems as commonsense as can be, but consciousness is so difficult that it stands to reason that some powerful intuition(s) is getting in the way.

    This model of knower and known is the ancient one, the traditional one. Everyone wants to put the one on the shoulders of the other – thus the long, fruitless, tug o’ war. But what if this ‘picture’ of the ‘relation’ is just another heuristic? What if, like all heuristics, it instrumentalizes *informatic neglect* to derive computational efficiencies? What if, like all heuristics, it’s *matched* to specific problems?

    So to repeat my question: Do you think that semantic cognition is a *universal* problem-solver, or just a particularly powerful heuristic? The answer you seem to be giving is that, Yes, it is universal. If your answer is indeed Yes, my question would be: What kind of symptoms might we expect if you turned out to be wrong?

  19. Vicente says:

    Scott:

    I am not very sure of what you mean, but I feel that it’s becoming tautological: in order to identify the differences that define the information, you need to produce the information that allows you to spot the differences.

    What I am saying is that there is no knower and known, the known is somehow produced internally by the knower, based in part on some “input” whose very nature can be very much decoupled from the known.

    Objects? As many as subjects to subjectively perceive (actually create) them.

    Heuristics? Well, sometimes; I don’t believe there is an absolute algorithm that always converges to some logical result.

    I don’t think semantic cognition is only a problem solver; probably it is also a problem generator. Our minds suffer serious constraints in tackling many problems.

    The symptom is clear: SUFFERING.

  20. Arnold Trehub says:

    @Scott: “So my question to you would be: Do you think that semantic cognition is a *universal* problem-solver, or just a particularly powerful heuristic?”

    I would have to be omniscient to flatly claim that “semantic cognition” is a universal problem solver. So I certainly wouldn’t make that claim. Whether or not semantic cognition is a powerful heuristic depends on the power of the particular brain mechanisms that instantiate semantic cognition. What brain mechanisms do you have in mind? The best we can do is propose a theory of cognition in sufficient biophysical detail to enable an empirical test of its implications. Any theory that cannot be evaluated empirically remains in the domain of faith, not science.

    In my own theoretical model of the cognitive brain, the output of semantic mechanisms sets the *terms* of a problem to be solved, while other specified mechanisms engage in the solution of the problem. To find out more about the proposed neuronal mechanisms, see *The Cognitive Brain* (1991), Ch. 6, “Building a Semantic Network”, Ch. 8, “Composing Behavior: Registers for Plans and Actions”, and Ch. 9, “The Formation and Resolution of Goals”. To my knowledge, no other theoretical model of the cognitive brain has been able to successfully explain as wide a range of cognitive phenomena as the model I have proposed. Furthermore, the retinoid model of consciousness has successfully predicted new kinds of conscious experiences — vivid conscious content never before experienced.

    Scott: “From my standpoint, I think BBT, which is a theory of the *appearance* of consciousness, could help you rebut many of the kinds of ‘in principle’ objections that you face.”

    Perhaps you are right. But what are the ‘in principle’ objections that I face?

  21. Charles Wolverton says:

    Arnold:

    I’m trying to understand how you are thinking of PHI as related to retinoid space and “precognitive mechanisms”. So, here’s my take:

    Let’s simplify the processing to two binary arrays, a 16 element precognitive processing array and a four element retinoid array. The two arrays then have 16 bits and four bits of “intrinsic” information (AKA, entropy) respectively. If they operate independently, the system S comprising both arrays has entropy that is just the sum of the array entropies, viz, 20 bits, and the integration information is zero.

    Now suppose the two arrays are interconnected through a switch and that closing the switch reduces the number of states S can enter from 2^20 to, say, 2^8. Then the system entropy becomes eight bits and the integration information PHI becomes 12 bits.
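The toy arithmetic above, spelled out (the numbers are the ones in the example, not anything measured): PHI is the sum of the parts’ entropies minus the entropy of the coupled system.

```python
# Two binary arrays: 16 elements + 4 elements, each element one bit.
H_parts = 16 + 4        # bits of entropy if the arrays are independent

# Switch open: system entropy is the sum of the parts, so PHI = 0.
phi_open = H_parts - H_parts

# Switch closed: coupling restricts the joint states, entropy drops to 8.
H_coupled = 8
phi_closed = H_parts - H_coupled   # 20 - 8 = 12 bits of integration info

print(phi_open, phi_closed)        # 0 12
```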

    You seem to address three brain states: deep sleep, preawakening, awake. I assume that the awake brain state corresponds to closing the switch so that S is fully active and PHI is 12 bits. You say that for the preawakening state PHI = 0, from which I infer that it corresponds to the switch in my system being open. I can imagine arguing that in deep sleep the two arrays are essentially inert, in which case all three entropies are zero and PHI is as well. But then you say that in deep sleep “the precognitive array” has PHI >> 0 (don’t you mean S has PHI >> 0?), suggesting that at least one of the two processors is active. But wouldn’t it remain active in the preawakening state, in which case PHI would not be zero?

    So, my interpretations are inconsistent. Please straighten me out.

  22. scott bakker says:

    Vicente: “What I am saying is that there is no knower and known, the known is somehow produced internally by the knower, based in part on some “input” whose very nature can be very much decoupled from the known.”

    I’m the one peddling the strange view, I understand. Part of the reason I frequent boards like this is to find ways to communicate what really amounts to a pretty radical reconceptualization of the problem.

    The above quote actually illustrates what I mean when I say semantic cognition is ‘heuristic.’ You say there is no ‘knower and known,’ and proceed to explain why in ‘knower/known’ terms. This is what I’m talking about: not the ‘real’ status of knower and known (which is what you’re trying to characterize), but the *cognitive reflex* that makes this move so intuitive.

    So you’re saying (or seem to be saying) that ‘known’ is somehow ‘internal’ (and thus ontologically subordinate) to the ‘knower,’ whose input processing idiosyncrasies ‘decouple’ the result (representational or what have you) from the real. This is a classic approach. The big problem, of course, is that if the known is internal to the knower, then the knower, as another thing known, would seem to be internal to itself. Adopting some form of representationalism is the classic way out: you divide the known and the knower into the known in itself versus the known for you, and the knower in itself and the knower for you.

    I think several centuries of perplexity should be enough to warrant assuming this is likely a dead end. It could be, as the prevailing assumption maintains, that the impasse is due to the lack of the requisite information. But as intuitive as this approach is, I think this is a longshot.

    Now I’m actually offering you a way out of this dead-end dichotomy, the ‘knower/known’ heuristic. There literally is no ‘knower’ and ‘known’ in my account, at least not in any sense that forces us to take them as the basic organizational principle framing our attempts to puzzle through consciousness.

    I’m simply asking you to consider the possibility that the reason you can’t see the exit I’m arguing for is that you are assessing my claims according to reflexive intuitions *belonging* to the K/K heuristic. Thus the parallelism between what you are saying with reference to knower/known and semantic versus nonsemantic information: the only information *known,* you are saying, is semantic information, information that means something for us. There is no ‘information in-itself’ that we can appeal to.

    Of course, I can make all kinds of appeals, to the fact that your computer is presently functioning, despite the fact that the non-semantic informatic complexities that make it possible are not ‘for you’ in any ‘meaningful’ way. But you can turn around, applying the K/K heuristic once again, and say, ‘Sure, but whatever it is that’s making this computer run doesn’t change the fact that you are restricted to *idealizations,* abstract interpretations of what is the case in and of itself.’ And so we are off to the ancient and interminable races.

    Now as it stands, I’m simply *stipulating* the K/K heuristic. There is actually a very strong argument to be made that all human cognition is heuristic. I urge you to investigate the work being done in ecological rationality if you’re the least bit skeptical. At this point, all I’m asking you to consider is the WAY THE PROBLEMATIC CHANGES if we take the K/K heuristic as a given.

    Because I would argue that it opens up quite a bit of room, conceptually speaking, to rethink the consciousness problematic. Heuristics are adaptive artifacts, ways to efficiently solve specific problems. This is what makes *matching* so important, and how it is, as you say, that heuristics are so apt to generate problems when used ‘out of school.’

    As we speak, brain science is inexorably relegating the ‘feeling of willing’ to the trash heap of ‘cognitive illusions.’ What I’m saying is that there’s a *chance* that the same fate awaits all intentional concepts, that the meaning you take to be prior to everything could very well be an artifact of the severe informatic constraints faced by the conscious brain, constraints that force it to rely on, you guessed it, heuristics. For instance, when you consider the knower/known relation in informatic terms, the first thing you realize is the drastic amounts of missing information, the out and out cartoonishness of the relation compared to what science has discovered regarding the processes involved. No information regarding neural processing can be accessed. The ‘object’ is typically homogeneous, singular, and the ‘subject’ is little more than an assumptive blur. This is the signature characteristic of heuristics: the way they systematically neglect information to get the job done.

    Now I fully admit that ‘information,’ as I’m using it, is also a heuristic. I personally think it’s one that happens to have a far broader scope of applicability than the K/K heuristic, but given the rather dismal track record of the latter when it comes to consciousness, I think suspending our commitments to the traditional way, even if only to *explore* the interpretative/explanatory possibilities of information non-semantically conceived is more than warranted.

    Sorry, Vicente. I didn’t expect to go on such a tear! Parents and their children, you know…

  23. scott bakker says:

    Arnold: “Any theory that cannot be evaluated empirically remains in the domain of faith, not science.”

    Amen, brother! A-men! But then, as you know, research without theory is blind.

    “But what are the ‘in principle’ objections that I face?”

    We’ve been rubbing elbows on the web for a while now, but I’m only guessing that most think your account doesn’t even dent the hard problem, that modelling neurofunctional correlates does not consciousness make. My own curiosity pertains (and I’m working from memory) to the way your *psychophysical* reasoning seems to turn on perceived structural homologues between retinoid and experiential space. Has anyone ever saddled you with any vehicle/content critiques – you know, ‘Hamming distance’ type problems?

  24. Arnold Trehub says:

    @Scott: “… modelling neurofunctional correlates does not consciousness make.”

    I agree. That’s why I don’t model neurofunctional *correlates*. I model neuronal brain *mechanisms* that are competent to generate relevant *analogs* of conscious experience; i.e., these mechanisms generate neuronal representations that are similar to “what it is like” to have a particular kind of conscious experience. In my view this is the best that we can do within the norms of science to solve the so-called hard problem. Here is a recent exchange that I had with Stevan Harnad on this very issue:

    Stevan Harnad, 15 August 2012, 15:48

    PSYCHOPHYSICAL MODELS EXPLAIN DOINGS, NOT FEELINGS

    Arnold, I don’t doubt at all that your model explains some dynamic illusions, but you have to remember that illusions, just like “veridical” perceptions, consist of two things: doings and feelings. In the Müller-Lyer illusion, for example, one of two equal-length lines looks longer, depending on whether the arrows at the end of the lines are facing in or out. Subjects can adjust the line lengths until they look of equal length, but then one of the lines will in reality be longer. A (perspectival!) model of psychophysical length recognition and judgment could, like your retinoid model, predict what combinations of inward and outward pointing arrows will produce what combination of length judgments. That’s important, and it is indeed a causal explanation, but it is a causal explanation of doings: recognition and length discrimination. It does not explain why any of it feels like anything. The feelings are tightly correlated with the doings (so tightly, that one would have to believe in voodoo to imagine that anything but the brain causes both) and hence the model can predict the correlated feelings too; but that does not explain either why or how the brain causes those correlated feelings: just why and how the brain causes the doings with which they happen to be correlated. How and why those doings are felt doings remains unexplained (and sounds causally superfluous).

    Arnold Trehub, 16 August 2012, 07:31

    THERE IS NO *DUALITY* SEPARATING RETINOID DOING FROM FEELING, SO THE NOTION OF CORRELATION IN THIS CASE IS A NON SEQUITUR

    Stevan, the retinoid model of consciousness is not a psychophysical model. It is a detailed neuronal model of a system of brain mechanisms that regulate and CONSTITUTE feeling (conscious experience). In the SMTT experiment that I described above, the subjects are NOT recognizing or judging an external stimulus as in a standard psychophysical experiment; they are having a vivid feeling (conscious experience) of a triangle in motion when there IS NOTHING LIKE A TRIANGLE IN THEIR VISUAL FIELD. When they adjust the width of the felt triangle to match its varying height, there is NO external object on which to base their adjustment. It is entirely an INTERNALLY constructed feeling generated by the neuronal structure and dynamics of the retinoid system. If you were to look over the shoulder of the subject, you would have the same vivid feeling of a triangle in motion, but since your retinoid system is not a duplicate of the subject’s, you might not agree with his adjustment for height-width equality. The bottom line is that this experiment provides very strong evidence that the pattern of autaptic-cell activity in retinoid space IS our feeling/conscious experience. No correlation involved.

    @Stevan

    You are right when you say “Explaining the Causal Role of Consciousness is Hard”. But what isn’t described can’t be explained. So until you give a description of what consciousness/feeling means for you, it is unlikely that any causal explanation of consciousness, no matter how well it satisfies scientific norms, will satisfy you.

  25. VicP says:

    Scott: I tend towards how we say it: we don’t say our brain is part of our bodies, but we say the brain is part of our self; so there is a perceived separation between self and brain, or the self is a more fundamental state of being, or a fundamental state of brain consciousness. Most phenomenal states tend towards time thresholds, so it would be logical that the threshold level is established by a more fundamental structure that contains its own time or being: the self. What we call conscious brain states are states which appear to go in and out of our being or self; states that exceed time thresholds.

  26. scott bakker says:

    Arnold: A telling exchange to be true. And I *almost* entirely agree with you: I think you are genuinely explaining *something important about* specific instances of conscious experience. (Of course, you need to explain how ‘autaptic-cell activity in retinoid space’ ties into consciousness globally understood, and what structurally differentiates this particular set of neuronal mechanisms such that it generates conscious experience where other neuronal mechanisms do not).

    But you’re never going to convince someone like Harnad – ever, I daresay – until you find some way explaining (away) the dichotomy of ‘doings’ versus ‘feelings.’ I think this is going to be a deep problem facing neuroscientific researchers such as yourself, moving forward. I think you become so used to looking at the meat and the mind as two sides of the same informatic coin, that it begins to seem obvious that explaining experience is what you are doing. Not so for the majority of us, I’m afraid.

    So for Harnad, the million dollar question is one of finding some way of anchoring semantics in technics. To convince him, you need to show him how to get from one to the other–using only technics! Short of an explanatory idiom that is obviously common to both, you have a pretty much impossible task ahead of you.

    Think of my “Thinker as Tinker” dialogue: What do you think Al would say to Harnad?

  27. scott bakker says:

    VicP: I’m a theorist who has a very low opinion of theory, I’m afraid. I’ve read too much cognitive psychology to think that reasoning accomplishes much more than rationalizing our preexisting conceits outside of the methodological and institutional constraints of the natural sciences. The ‘self,’ I fear, is one of those conceits that science is going to give a rough ride.

  28. haig says:

    Phototaxis in plants and other systems is not as integrated as in nervous systems. Tononi uses the metaphor of a photodiode, or a lattice of millions of photodiodes causally distinct from each other, to show what non-integrated information looks like. A thermostat would follow suit. The advance made in Tononi’s IIT is not that it centers simply on information, but on the integration of that information by causally coupling distinct informational elements into larger connected wholes. That doesn’t get you to qualia space directly, but it does get you to what makes complex systems like nervous systems the necessary structures needed for consciousness.

    Integrated information increases with learning and culture as well, through plasticity and developmental re-wiring, not just through genetic variation in brain-specific genotypes; so in a sense, babies or feral children or people with disabilities (blindness/deafness) with less complex brains will be less conscious. Babies have more neural connections than adults, which are culled rapidly upon experiencing the world in their formative stages. This culling is actually producing more complexity, a block of stone is less complex than a carved statue, and so interaction with the environment, especially the social environment, plays a huge role in increasing integrated information. Algorithmic information theory (Kolmogorov complexity) touches on this, but does not touch upon the integration, which is Tononi’s biggest contribution.

    And yes, this formulation is panpsychist. It seems that consciousness in the quale sense is fundamentally baked into the universe, that subjective experience, this ‘view from the inside’, increases as integrated information increases, and that the qualia space is influenced by the context of the specific history of the evolution of the complex system exhibiting high Phi in a relational sense to its environment.

    Waxing more mystically, we can view the entire universe as a mind, not exactly as Idealists have thought before, but more Informationalist, and the leading edge of evolved complexity exhibiting the greatest Phi is the part of the universe most conscious at any given point. Christof Koch pays tribute to Pierre Teilhard de Chardin in his latest book as being prescient and surprisingly contemporary when seen through the lens of IIT.

  29. haig says:

    And as for IIT vanquishing the hard problem, it really doesn’t. The hard problem is, in my opinion, in the same class of questions as ‘why is there something rather than nothing’; it may be unanswerable, it may even be the same question! We find ourselves in a universe that is growing more conscious as it evolves. IIT aims to formalize what we mean by consciousness, but doesn’t say ‘why’ qualia exist in the first place; that is an aspect of our universe. “Why anything, why this?” as Parfit once wrote. Who knows, but don’t let that stand in the way of making concepts of consciousness more formal so that we can understand, improve, and eventually create more of it.

  30. Vicente says:

    Scott: sorry I am finding #22 not easy to digest, could you please elaborate a little bit more on this K/K heuristic.

    BTW, I have always understood heuristic as opposed (or different) to straightforward algorithmic solution finding. Heuristic originally meant “to find”.

    Haig, Scott: Precisely, in this country all the roads lead to ontology, and yes Scott, probably, as you said, ‘known’ is somehow ‘internal’ (and thus ontologically subordinate) to the ‘knower’.

    Following the mystic vision: I am who I am. And now that Teilhard was mentioned, maybe we could say that entropy has as much to do with it as anthropy (his omega point = global maximum phi); Scott, mind you, because in this rough ride to heaven science hides surprises for all camps. Penrose has a good analysis of the anthropic principle, Universal constants and consciousness. The recipe has turned out a bit too rich, and for the moment nothing pans out.

    Now, landing in the runway of common sense after this high, I have to admit that, Arnold, as usual, introduced the real science in #20:

    “Whether or not semantic cognition is a powerful heuristic depends on the power of the particular brain mechanisms that instantiate semantic cognition. What brain mechanisms do you have in mind?”

    Who answers first?

  31. scott bakker says:

    Vicente: Claiming that some kind of ‘omnipotence assumption’ informs consensus thinking on human cognition, Todd and Gigerenzer write, “as far as we can tell, humans and other animals have always relied on simple strategies or heuristics to solve adaptive problems, ignoring most information and eschewing much computation rather than aiming for as much as possible for both.” They and their research group then set out to demonstrate the way heuristic strategies can outcompete optimization strategies not only in metabolic cost versus effectiveness terms, but in accuracy as well, when they are appropriately matched to the kinds of problems they evolved to solve.
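A minimal sketch of the kind of fast-and-frugal strategy Todd and Gigerenzer describe, a “take-the-best” decision: check cues in order of assumed validity and stop at the first one that discriminates, ignoring everything else. The cities and cue values here are invented for illustration.

```python
# Take-the-best heuristic: decide which of two items scores higher by
# examining cues in (assumed) validity order and stopping at the first
# cue that discriminates. Most of the available information is ignored.

# Cues ordered by assumed validity: has_team, is_capital, has_university.
cities = {
    "A": (1, 0, 1),
    "B": (0, 0, 1),
}

def take_the_best(x, y):
    for cue_x, cue_y in zip(cities[x], cities[y]):
        if cue_x != cue_y:                # first discriminating cue decides
            return x if cue_x > cue_y else y
    return None                           # nothing discriminates: guess

print(take_the_best("A", "B"))            # "A", decided on one cue alone
```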

    I’m saying the K/K conceptual model (our propensity to think knowledge in terms of normative and intentional relations between knower and known) is heuristic rather than universal – and *obviously* so, when you pause and consider how much information the K/K model ignores.

    As a heuristic, it has a certain restricted scope of applicability: it will work tremendously for some problems, and dismally for others. What I’m saying is that after millennia of confusion, the problem of consciousness pretty obviously lies outside the K/K model’s scope of effective applicability.

    I’m arguing that once you bracket the K/K model, you’ll see that consciousness could very well lie within *nonsemantic information’s* scope of applicability. But so long as you insist on the universality of the K/K model, novel reconceptualizations such as my own will probably sound more like noise than insight.

    You’re right about Arnold being the voice of hard scientific sense! But even he agrees that we have to have some prior (that is, theoretical) sense of what we’re looking for before we can get down to the empirical business of finding it.

  32. 32. Charles Wolverton says:

    haig -

    “This culling is actually producing more complexity”

    This phenomenon has confused me, but your stone-statue analogy helps. The natural way to think of plasticity (at least for me, knowing nothing of the detailed mechanisms) was that new connections “grow” as learning progresses. I infer that instead some of the many, many existing connections are strengthened by learning while others weaken or break – perhaps some neurons even atrophying. Is that a reasonably accurate high-level picture of the process?

    Would you go so far as to infer from the close ties between complexity and “consciousness” that they are essentially one and the same? I’m inclined to do so, but I’d be interested in hearing a counterargument, although preferably one not founded on the quicksand of qualia and/or subjectivity.

    It just occurred to me that the swampman thought experiment (see wiki) could be viewed as a limiting case of how complex an artificial system can be and still not be “conscious”. Swampman-Davidson is precisely as complex as human-Davidson, yet I gather many – perhaps most – would say he isn’t “conscious”. (FWIW, I disagree.)

    “kolmogorov complexity) touches on this, but does not touch upon the integration, which is Tononi’s biggest contribution”

    It perhaps wasn’t obvious, but in comment 9 I was analogizing statistical dependence and subsystem integration, the latter involving physical connections among entities, the former statistical “connections” among RVs. Extending the comm analogy a bit, error correcting codes inject redundancy into the channel bit stream thereby creating intersymbol statistical dependence. This results in a reduced end-to-end (AKA “user”, as best I recall) information rate. One could call the difference between channel and user information rates the “integration information” rate – or even PHI.
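    The channel-rate versus user-rate arithmetic in the comment can be made concrete. The sketch below uses my own toy figures (a rate-1/3 repetition code and an assumed channel speed), not numbers from the comment itself:

```python
# Toy illustration of the comment's analogy: an error-correcting code injects
# redundancy, creating statistical dependence among channel bits and lowering
# the end-to-end ("user") information rate. The commenter dubs the gap "PHI".
channel_rate = 3_000_000      # channel bits per second (assumed figure)
repetition = 3                # rate-1/3 repetition code: 3 channel bits per user bit

user_rate = channel_rate // repetition        # end-to-end information rate
integration_rate = channel_rate - user_rate   # "integration information" rate

print(user_rate)         # 1000000
print(integration_rate)  # 2000000
```

The gap exists precisely because the channel bits are no longer statistically independent of one another, which is the point of the analogy with subsystem integration.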

  33. 33. Arnold Trehub says:

    Re convincing Harnad:

    Doesn’t this boil down to a question of Harnad’s intuition without empirical evidence vs. my intuition with supporting empirical evidence? Isn’t this like a replay of the flat-earth people vs. the globe-earth people, or the sun-around-the-earth people vs. the earth-around-the-sun people? In other words, isn’t this a matter of faith vs. science? History tells us that as relevant evidence accrues, faith grudgingly gives way to science.

  34. 34. scott bakker says:

    I actually think geocentrism provides a great analogue in the history of science, a way to understand what makes the ‘noocentrism’ of thinkers like Harnad so difficult to dislodge.

    As it so happens, we perceive ourselves moving whenever a large portion of our visual field moves – when we experience ‘vection,’ as psychologists call it. Short of this and vestibular effects, a sense of motionlessness is the cognitive default. As a result we stand still when the world stands still relative to us. So when our ancestors looked into the heavens and began charting the movement of celestial bodies, the possibility that they were also moving seemed preposterous. So, given the information available and our existing cognitive capacities, geocentrism was, hands down, the most intuitive thing to believe – and so long as we remained ‘earthbound’ the information required to overcome those intuitions was simply not available.

    With noocentrism you have the same basic structure: given the information available via introspection and intuition, the semantic character of consciousness remains far and away the ‘most intuitive’ thing to believe. As with geocentrism, our ‘brainbound’ perspective generates the illusion that consciousness is ‘out of play,’ immobile in some vague sense with reference to the natural world that seems to appear within it. In the same way that, as earthbound, we lacked the information required to believe that the earth moved with the other planets, we lack the information required to believe that ‘consciousness’ is simply another cog in the machinery of nature.

    “But I have the information here!” you complain. So the question is, why do we find it so difficult to leave our noocentrism behind compared to geocentrism? Well, geocentrism wasn’t that easy (there are still believers!), and biocentrism, even a century and a half after Darwin, remains an open issue – but in each case we can chalk this up to cultural incompatibilities. So maybe noocentrism is following a similar arc.

    But there’s a wrinkle to noocentrism that makes it quite unique. Being informatically earthbound, it turns out, is relatively easy to overcome: all it took was Galileo’s ‘Dutch Spyglass’ to open the informatic gates. Being informatically epoch-bound is somewhat more difficult, which is perhaps why biocentrism remains, against all reason, an open debate. Being informatically *brainbound,* however, radically changes the stakes. No matter how much information you present to Harnad, he will be unable to ‘explain away’ semantic intuitions *simply because they seem to come first,* to be the very condition of comprehending information. Being wrong about one’s empirical sense of motionlessness is small potatoes: our environments are replete with examples pertaining to the illusion of motionlessness that underwrites geocentrism. Being wrong about one’s intrinsic sense of *meaningfulness,* however, is something I personally think few humans are even capable of thinking plausible, let alone fact. Everything anyone has ever experienced passes through this brainbound bottleneck, including the information you come bearing!

    In other words, it’s not simply a certain experience (apparent motionlessness) you have to overcome, but what appears to be the very *condition of any experience whatsoever.* And if you look closely, you’ll see a good number of the debates fit this very profile: Harnad’s, even Vicente’s argument that semantic information is the only intelligible information! It’s a hard, hard intuition to overcome.

  35. 35. Arnold Trehub says:

    Charles: “Would you go so far as to infer from the close ties between complexity and “consciousness” that they are essentially one and the same? I’m inclined to do so, but I’d be interested in hearing a counterargument, although preferably one not founded on the quicksand of qualia and/or subjectivity.”

    What evidence do you have that a highly complex Google server center is conscious? Why do you think that a counterargument hinging on subjectivity is founded on quicksand?

    I have proposed, as a working definition, that consciousness is a transparent brain representation of the world from a privileged egocentric perspective. This brain state constitutes subjectivity. Specifying the neuronal structure and dynamics of a putative brain mechanism that instantiates subjectivity enables us to make predictions about the kinds of conscious experiences one should have under specified conditions. I have proposed a theoretical model of such a brain mechanism (the retinoid system) and tested its implications in a variety of ways. The empirical findings support the validity of the retinoid model as a neuronal explanation for human conscious experience. Do you know of any other kind of existing mechanism that has exhibited evidence of consciousness?

  36. 36. Arnold Trehub says:

    Scott: “(Of course, you need to explain how ‘autaptic-cell activity in retinoid space’ ties into consciousness globally understood, and what structurally differentiates this particular set of neuronal mechanisms such that it generates conscious experience where other neuronal mechanisms do not).”

    1. What is the difference between “consciousness understood” and “consciousness globally understood”?

    2. The structure and dynamics of neuronal activity in retinoid space is distinguished from all other neuronal mechanisms by the capacity to represent the features of multiple objects and events, properly bound in spatio-temporal register within a coherent volumetric plenum having a fixed perspectival locus of spatio-temporal origin, a core self (I!).

  37. 37. scott bakker says:

    Arnold: I’ll definitely take a look. But do you see what I’m on to, about why it’s so difficult to convince your interlocutors?

  38. 38. Arnold Trehub says:

    Scott, I do see what you are on to and I agree with your comments in #34. Stevan Harnad is a good example of the problem of what you call “noocentrism”. He is knowledgeable and thoughtful, yet even though he is a physical monist and accepts that consciousness/feelings must be caused by brain events, he insists that *feelings* simply CANNOT be *doings*. He is unable to understand that in a non-dualistic world feelings (conscious experience) MUST be physical doings, in particular the biophysical doings of a special kind of brain mechanism. I have proposed that the retinoid system is this unique brain mechanism, and I have demonstrated its predictive power.

  39. 39. PhiGuy says:

    Forgive me if I haven’t taken in all this commentary; I just wanted to respond to some points raised in the review:

    1. You write: “I’m unclear about the status of the ‘information’ we’re integrating and I don’t really understand what the integration amounts to, either. Tononi starts out with information in the unobjectionable sense defined by Shannon, but he seems to want it to do things that Shannon was clear it couldn’t. He talks about information having meaning when seen from the inside, but what’s this inside and how did it get there?”

    The information is causal information. The heart of Tononi’s theory is based on an objective mathematical analysis of causal systems which demonstrates the existence of information generated above and beyond the sum of a system’s parts. All causation generates AT LEAST one bit of information. (This is the photo-diode consciousness.) If the causal mechanisms are organized in a particular way, “yes or no” mechanism upon “yes or no” mechanism, the bits of information can grow, seemingly without limit. This information, being real, needs to be somehow “represented” or “instantiated” in the world; otherwise how could it be said to “exist”? That representation is experience. “Integration” is just another way of saying that consciousness is always necessarily unified; an experience is always one entity, it’s irreducible a priori. This is why he describes a digital camera as not generating experience; there is no causal system there which brings the information about each photo-diode together. As for Shannon, as Tononi makes clear in the book, the founder of information theory thought his analysis was purely mathematical. Tononi thinks that’s wrong. With consciousness as a (the?) fundamental property, mind itself can undergo informational processes, giving meaning and structure to experience. Shannon WON’T get you to the IIT, but Shannon+fundamental ontological presupposition will.
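    The “at least one bit” claim, and the idea of information carried by a whole beyond its parts, can be illustrated with elementary Shannon quantities. This is a toy sketch of my own using mutual information, not Tononi’s actual phi measure:

```python
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# A photodiode makes one binary discrimination: light vs dark, assumed equiprobable.
photodiode = {"dark": 0.5, "light": 0.5}
print(entropy(photodiode))  # 1.0 -- the "at least one bit"

# Two binary units that always agree (B copies A): the joint state carries less
# entropy than the parts summed, and the difference -- the mutual information --
# is information the whole carries beyond its parts taken independently.
joint = {("0", "0"): 0.5, ("1", "1"): 0.5}
h_a = entropy({"0": 0.5, "1": 0.5})
h_b = entropy({"0": 0.5, "1": 0.5})
mutual_info = h_a + h_b - entropy(joint)
print(mutual_info)  # 1.0 bit shared between the halves
```

Tononi’s phi involves minimizing over partitions of a causal system rather than a single mutual-information term, but the "beyond the sum of the parts" arithmetic has this same shape.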

    2. Next: “I don’t like the idea of qualia space, another of Tononi’s concepts, either. As Dennett nearly said, what qualia space? To have an orderly space of this kind you must be able to reduce the phenomenon in question to a set of numerical variables which can be plotted along axes. Nobody can do this with qualia; nobody knows if it is even possible in principle.”

    Have you read this whopper of an article?: http://www.ploscompbiol.org/article/info:doi%2F10.1371%2Fjournal.pcbi.1000462
    It’s a little beyond my pay-grade but I understand the basic idea. Tononi does exactly what you want; he “reduces” (not really though, as a conscious experience is fundamentally irreducible) or “maps” consciousness to the non-metric geometry of information in an n-dimensional Hilbert space, each mechanism splitting a particular symmetry (like a game of 20 questions) and thereby changing the structure of an n-dimensional polytope in a particular way, “shaping” the quale. By looking at experience informationally, we can begin to map the ineffable aspects of mind with math.

    Finally, “One implication of the theory which I don’t much like is the sliding scale of consciousness it provides. If the level of consciousness relates to the quantity of information integrated, then it is infinitely variable, from the extremely dim awareness of a photodiode up through flatworms to birds, humans and – why not – to hypothetical beings whose consciousness far exceeds our own.”

    3. Why not? Everything else in the world admits of degree. Why not mind, the most fundamental of all phenomena? Wouldn’t it be a WEIRDER world if the whole of space and time were insentient until, suddenly, we showed up on the scene? THAT is hubris. Just as we once thought we were the center of the universe and were proven wrong, so too will our minds have to be removed from their privileged pedestal. And if minds a trillion times smarter than our own do wipe us out, we must at least CONSIDER the possibility that they know what they are doing, just as we know what we’re doing when we swat a fly. Hopefully, though, our moral instincts are objectively correct and these intelligences would share in the knowledge that human life has reached a mental threshold of self-awareness which grants it the right to be protected.

  40. 40. Peter says:

    PhiGuy,

    Thanks.

    1. I don’t see how adding causation (itself not well understood at a metaphysical level) into the mix really helps. Yes, if we could add in any fundamental ontological presupposition of our choice we could get all sorts of results, but you know, we can’t. It might be that I’m missing the point completely, but I’ve given it an honest shot.

    2. No; no he doesn’t – as you say, ‘not really, though’. Maybe all that maths does something useful, but actual qualia are left on one side. Either he’s just assuming that qualia reduce to information, and hence begging the question, or he has catastrophically misunderstood what these mad and intractable entities are supposed to be.

    3. No, I don’t think everything admits of degree, not in the final analysis. Maybe it will help to consider the quality of being alive. My view is you either are or are not alive. We can argue a bit over where the border falls – are viruses alive? – but the difference between me and a blade of grass is not that I’m 80% more alive: we’re both living; and there’s no upward scale whereby things can go on being even more alive: twice as alive as me, forty times, or whatever; that makes no sense.

    In a similar way I’d say entities are either conscious or not. They may have more complex mental lives and quicker perceptions, but ultimately the issue is a yes or no one. Simple things are not a little bit conscious and complex ones possibly more conscious than me. That’s what my intuitions tell me, and I accept yours may tell you something else, conscious experience being what it is.

  41. 41. Michael Baggot says:

    It seems to me that Tononi has rather blithely omitted the rather well known lateral interconnection system of the retina itself, which lies immediately behind the retina and organizes the input to the optic nerve. This system is responsible for the on/off surround relationships between the rods and cones. This system does not create experience; rather, it enhances resolution such that the experienced resolution of retinal images exceeds the simple pixel density of the fovea itself. This enhancement is due to eyeball dithering/jitter. While this functionality probably does account for an informational aspect of experience, surely it in no way accounts for the enisled nature of experience itself.

  42. 42. PhiGuy says:

    Peter -

    Good points. I think they are all answerable, though you might not accept the conclusions. That’s fine; it’s just that the points you bring forward are addressed by the theory, rightly or wrongly.

    1. How is the IIT a causal theory? Is this not just a metaphysical mosh-pit? Well, the answer here is straightforward. According to the IIT, anything that exists in consciousness is informative. It’s a difference that makes a difference to the system. A complex integrates information until it reaches its highest value for phi and then, given what it knows, selects its next state. (For ease, I’m leaving out details here involving time. Actually, the system selects its best guess about what its past state was.) This selection is made based on the information available to the system, aka conscious information. So, consciousness is always causally effective to the system, but, given the system’s underlying uncertainty, does not wholly determine the selection. The system (in this case a brain) as a whole “enslaves” (Tononi’s phrase) its lower hierarchical parts (in this case the neurons). So, consciousness is ALWAYS causally effective. Indeed, consciousness IS causation itself. Tononi doesn’t always make it explicit but the point is clear: ALL causation occurs in consciousness. Causation is consciousness “selecting” from alternatives available to itself. Make sense? It demands a certain metaphysical leap to get there, but, if the theory is right, it’s a leap both necessary and profound.

    2. I would say he isn’t ASSUMING that qualia reduce to information; he is trying to prove it. (This is the photo-diode thought experiment.) Ultimately, yes, the hard problem is answered only by saying that consciousness is fundamental. But, a consciousness without information does not exist, as such. Information “structures” experience. At the same time, information, to exist, must be realized in consciousness; where else could it have ontological being? Consciousness is information from the inside. It’s the only thing that’s “really, real” and exists in and of itself. A moment of consciousness is a unitary, causal node, a monad that is defined entirely by the information it represents and selection it makes.

    3. Well, I think the theory is straightforward here. If by alive you mean conscious then, on some level, everything is alive. The only things that exist in and of themselves are conscious entities selecting from probability distributions. That’s it. Everything else in the world (like a rock) is a discrimination our consciousness makes from chaotic data in order to navigate a world whose true causal reality we can’t perceive directly. So, a mind COULD be forty times as alive as you, aka have a phi value forty (fifty? 1000?) times higher than yours. Consciousness does have a minimum value though: one bit. Below that and you might as well divide by zero. So, I think the sliding scale makes sense here; you just have to see the world from the point of view of ACTUAL causation (aka consciousness). (BTW, this is why Hume’s problem of induction rankles; when we perceive billiard balls striking each other we cannot see the logic gates and causal networks which are actually affected by the act; we only see the quick and dirty representation that our minds have generated for us. It’s as if our perceptions are Microsoft Windows and the real world is the unseen CPU and logic board.)

  43. 43. JSG says:

    Eric (1) & Scott (4),

    I like the idea that considering integrated information necessary but not sufficient for consciousness gets one around the apparent absurdity of calling a diode conscious. But what then could be the missing ingredient within Tononi’s system that is required for consciousness?

    It seems to me that he states that integrated information IS consciousness, which is why he has no choice but to find consciousness in a diode. So, to my mind, calling II necessary but not sufficient does not eliminate the apparent absurdity of finding consciousness in a diode, since conscious diodes would seem to be an unavoidable part of Tononi’s system, and therefore, from his perspective, not absurd at all.
