Worse than wrong? A trenchant piece from Michael Graziano likens many theories of consciousness to the medieval theory of humours; in particular the view that laziness is due to a build-up of phlegm. It’s not that the theory is wrong, he says – though it is – it’s that it doesn’t even explain anything.

To be fair I think the theory of the humours was a little more complex than that, and there is at least some kind of hand-waving explanatory connection between the heaviness of phlegm and slowness of response. According to Graziano such theories flatter our intuitions; they offer a vague analogy which feels metaphorically sort of right – but, on examination, no real mechanism. His general point is surely very sound; there are indeed too many theories about conscious experience that describe a reasonably plausible process without ever quite explaining how the process magically gives rise to actual feeling, to the ineffable phenomenology.

As an example, Graziano mentions a theory that neural oscillations are responsible for consciousness; I think he has in mind the view espoused by Francis Crick and others that oscillations at 40 hertz give rise to awareness. This idea was immensely popular at one time and people did talk about “40 hertz” as though it was a magic key. Of course it would have been legitimate to present this as an enigmatic empirical finding, but the claim seemed to be that it was an answer rather than an additional question. So far as I know Graziano is right to say that no-one ever offered a clear view as to why 40 hertz had this exceptional property, rather than 30 or 50, or for that matter why co-ordinated oscillation at any frequency should generate consciousness. It is sort of plausible that harmonising on a given frequency might make parts of the brain work together in some ways, and people sometimes took the view that synchronised firing might, for example, help explain the binding problem – the question of how inputs from different senses arriving at different times give rise to a smooth and flawlessly co-ordinated experience. Still, at best working in harmony might explain some features of experience: it’s hard to see how in itself it could provide any explanation of the origin or essential nature of consciousness. It just isn’t the right kind of thing.

As a second example Graziano boldly denounces theories based on integrated information. Yes, consciousness is certainly going to require the integration of a lot of information, but that seems to be a necessary, not a sufficient condition. Intuitively we sort of imagine a computer getting larger and more complex until, somehow, it wakes up. But why would integrating any amount of information suddenly change its inward nature? Graziano notes that some would say dim sparks of awareness are everywhere, so that linking them gives us progressively brighter arrays. That, however, is no explanation, just an even worse example of phlegm.

So how does Graziano explain consciousness? He concedes that he too has no brilliant resolution of the central mystery. He proposes instead a project which asks, not why we have subjective experience, but why we think we do: why we say we do with such conviction. The answer, he suggests, is in metacognition. (This idea will not be new to readers who are acquainted with Scott Bakker’s Blind Brain Theory.) The mind makes models of the world and models of itself, and it is these inaccurate models and the information we generate from them that make us see something magic about experience. In the brief account here I’m not really sure Graziano succeeds in making this seem more clear-cut than the theories he denounces. I suppose the parallel existence of reality and a mental model of reality might plausibly give rise to an impression that there is something in our experience over and above simple knowledge of the world; but I’m left a little nervous about whether that isn’t another example of the kind of intuition-flattering the other theories provide.

This kind of metacognitive theory tends naturally to be a sceptical theory; our conviction that we have subjective experience proceeds from an error or a defective model, so the natural conclusion, on grounds of parsimony if no others, is that we are mistaken and there is really nothing special about our brain’s data processing after all.

That may be the natural conclusion, but in other respects it’s hard to accept. It’s easy to believe that we might be mistaken about what we’re experiencing, but can we doubt that we’re having an experience of some kind? We seem to run into quasi-Cartesian difficulties.

Be that as it may Graziano deserves a round of applause for his bold (but not bilious) denunciation of the phlegm.


  1. VicP says:

    Honestly Peter this is more of a skills category problem. If Graziano is dealing with an area which is out of his skills expertise, then you can be baffled by a 40 Hz rate, but numbers are no more special than a 60 BPM heart rate. Stuart Hameroff, who is an anesthesiologist, may be closer when he looks at microtubules, which I think are acting to connect neurons in a special way at a metabolic level to form cellular states in the cortical areas. What those states are is still a scientific problem, but at least they are going deeper into the problem than a 40 Hz number a layman observes.

    As an engineer, this is an engineering systems problem which involves understanding from the cellular “component” level up through the modular level etc. It also involves getting the overall system structures and integration correct. I have no problem for example understanding how any electronic device works from the substrate level up through the higher levels etc.

    As far as IIT goes, well, “information” is just another way of expressing cell states, which may come about via microtubules or whatever. No, a computer memory cannot become sentient just because its cells are all in particular or identical information states, but neurons probably do. Most philosophers get the problem of AI backwards, because computers were invented to mimic natural processors. It would be like thinking that the C++ compiler which sentient human programmers use to “talk” to the hardware level could suddenly talk back to us as a sentient being. Only in science fiction does that happen, to scare our sentient emotions.

    Once again, IIT, which is Tononi’s original theory (later taken up by Tegmark), is on the correct track, but when they use the analogy of a simple LED (light emitting diode), it strikes me as another skills category error.

  2. Arnold Trehub says:

    “The mind makes models of the world and models of itself…”

    Close, but a bit off the mark. It is the brain that makes models of the world and models of its supporting body. Notice that each person’s model of the world is constructed from a unique spatiotemporal perspective. This is the clue to subjectivity/consciousness. The human brain models a 3D world from a fixed locus of perspectival origin, which is our best candidate for our core self. Neuronal brain mechanisms organized on the basis of this principle successfully predict many previously unexplained conscious experiences.

  3. Tom Clark says:

    Graziano is on the money in posing the question to IIT of why integrated information (II) should constitute consciousness. Why, after all, should II feel like anything? That said, one could pose a similar question to Graziano: why, on his metacognitive account, do we make the judgment (mistaken, he says) that we have *qualitative* states? What is it about metacognition that makes us suppose (and suppose wrongly, he says) that the world appears to us in terms of a multitude of qualities, many of which aren’t decomposable into further elements?

    If he’s right about our judgment being mistaken, then the world really doesn’t appear to us this way. It only seems to appear this way. But of course the seeming *is* the appearance. In which case, if Graziano can explain this seeming via metacognition, he’ll have explained experience, not eliminated it, as he seems to think he’s doing.

  4. Sci says:

    Graziano seems to protest too much. How many articles will the Atlantic publish which come down to him simply whining?

  5. Stephen says:

    A prof once told our class that if we couldn’t explain semiconductor hole theory to our grandmother, we didn’t really understand it. Well, no one has been able to explain consciousness to me yet. As Graziano says, all the current theories rely on magic. They all hopefully propose an element that is the one that delivers the magic, but everything proposed just seems to be a necessary, but not sufficient, component of consciousness. Even metacognition just seems to be another layer of abstraction laid upon the basic model. Why would consciousness come from that?

    One thing I like about Graziano’s approach is he proposes ideas that are testable and can add an element of knowledge to the overall problem. Dennett takes a similar view; that we will figure it all out once we solve the “easy problem” as we’ll find there is no “hard problem”. I think we still have a long way to go before anyone will come up with a credible theory of consciousness.

  6. Jochen says:

    “The mind makes models of the world and models of itself, and it is these inaccurate models and the information we generate from them that makes us see something magic about experience.”

    Any theory that rests on ‘the mind making models’ seems to rest on uncertain foundations to me. Uncritically assuming that there is some ‘model-making’ facility smuggles in just that sort of capacity that is found to be questionable in the first place. Yes, we do speak of computers ‘modeling’ some salient aspects of the world, but this is really a shorthand for our interpretation of the data structures (or really, voltage patterns) the computer syntactically shuffles around as pertaining to that particular salient aspect of the world; there is nothing intrinsic to the computation that forces this interpretation upon us. (Note that I’m not arguing that one needs intentionality in order to ‘tie down’ a specific interpretation to a given computation—that would be question begging. All I’m saying is that the interpretation is left open by the computation on its own; there may be non-intentional ways to solve this problem, but if so, I know of no approach successfully demonstrating them.)

    Furthermore, if there is need for a model to cognize anything in the world, then the question is how the model—itself a part of the world—is cognized; certainly, introducing a model of that model won’t help. But if the system is somehow thought to have intrinsic knowledge of the model, then it apparently can come into intrinsic contact with some part of the world (the model). But then, why does there need to be a model in the first place?

    So I think that any account of the mysteries of consciousness in terms of some system producing an inaccurate model of the world really presupposes too much, and thus, fails to be explanatory.

    Re IIT, while I ultimately share the same reservations towards its explanatory capacity, I think the argument here is being sold short: the idea is that there is a part of the information stored in a system that is not reducible to the information stored in the system’s parts; this is intended to capture at least some puzzling phenomena of subjective experience, such as, e.g., their holistic nature, their resistance to reductive explanation (if you break the system up into parts, the phenomenon simply disappears), and possibly, their private and ineffable nature—it may not be possible to share this information with other systems, simply by virtue of them being separate systems from that which integrates this information.

    Of course, why this should lead to some sort of ‘what it’s like’-aspect doesn’t seem to be answered by this approach.

  7. Arnold Trehub says:


    Despite any counter intuitions, if a theoretical model of how the brain models the world is able to successfully predict/explain relevant mental phenomena that were previously inexplicable, it is a significant advance in our understanding of consciousness. See, for example, the retinoid model.

  8. Scott Bakker says:

    He’s been working his way toward Blind Brain Theory for a while now, I think, right down to using all my favourite metaphors! His own theory isn’t really a workable one, though, and in his last couple of pieces like these he seems to be backing away from it.

    I would urge him to embrace BBT, but then as Humphries likes to say, that would be tantamount to asking him to use my toothbrush! Besides, I don’t think theories cooked up by SF authors carry much water in neuroscientific circles 😉

  9. VicP says:

    Like simpler biological stimulus-response systems which only react to external “real time” stimuli, more advanced systems can build more advanced stimuli within, which we call models.

    The whole time notion is another misnomer of an advanced biomechanical system. Graziano in earlier articles talked about older notions of color theory and how they were overturned or scientifically explained out of the manifest image.

    We really don’t perceive cosmological time without telescopes and mathematics or we need to translate our own biological time into other realms.

    Metacognition may be explainable because the brain is not one simple “computational” process but several multi-threaded ones.

    BBT does make us look from within, or see that the structures of the brain which we study from “without” really originate from within. Blind Brain is an excellent analogy because the neocortex is structured more like the eye. It is multilayered like the retina and emanates from the more primal brain structures via the thalamocortical loop, much like the optic nerve. The more advanced structures are still presenting advanced models as stimuli to the primary drives and structures (the source of 40 Hz signals?). This would be a more proper model of intentionality.

  10. Tom Clark says:


    “Re IIT, while I ultimately share the same reservations towards its explanatory capacity, I think the argument here is being sold short…”

    Yes, IIT really does try to grapple with the essential properties of experience, and it takes as axiomatic that it actually exists. From Oizumi et al. (URL below):

    “Existence: Consciousness exists – it is an undeniable aspect of reality. Paraphrasing Descartes, ‘I experience therefore I am’.”

    What’s also nice (since it captures something I think is crucial about consciousness) is that this existence – this reality – is taken to be essentially subjective, not something available to 3rd person observation:

    “An experience is thus an intrinsic property of a complex of mechanisms in a state. In other words, the maximally irreducible conceptual structure specified by a complex exists intrinsically (from its own intrinsic perspective), without the need for an external observer.”

    This means that conscious experience isn’t produced as a further physical effect that’s detectable from outside. It exists only for the subject. This of course stretches the notion of existence, which is ordinarily tied to something being intersubjectively available. Experience exists, but not intersubjectively like brains. This claim might of course offend the sensibilities of some physicalists.

    But what about the qualitative nature of consciousness? For IIT, qualitative feels (qualia) are specified by the multi-dimensional structure of integrated information:

    “Figure 15. A quale: The maximally irreducible conceptual structure (MICS) generated by a complex. An experience is identical with the constellation of concepts specified by the mechanisms of the complex. The ΦMax value of the complex corresponds to the quantity of the experience, the “shape” of the constellation of concepts in qualia space completely specifies the quality of a particular experience and distinguishes it from other experiences.”

    Ok, but this addresses the qualitative *specificity* of a quale, not why it should be qualitative in the first place. To explain that, I think we need to consider the logic of representation and why it is that complexly recursive representational systems might necessarily end up with cognitively impenetrable, basic, irreducible elements that therefore necessarily appear to the system, and therefore *exist* only for the system, as qualities (since that’s what it is to be a quality).

    Re the intentionality of consciousness, closely tied to representation, imo, they say:

    “IIT naturally recognizes that in the end most concepts owe their origin to the presence of regularities in the environment, to which they ultimately must refer, albeit only indirectly. This is because the mechanisms specifying the concepts have themselves been honed under selective pressure from the environment during evolution, development, and learning [65]–[67]. Nevertheless, at any given time, environmental input can only act as a background condition, helping to “select” which particular concepts within the MICS will be “on” or “off”, and their meaning will be defined entirely within the quale. Every waking experience should then be seen as an “awake dream” selected by the environment. And indeed, once the architecture of the brain has been built and refined, having an experience – with its full complement of intrinsic meaning – does not require the environment at all, as demonstrated every night by the dreams that occur when we are asleep and disconnected from the world.”

    All told, I think IIT as set forth in the Oizumi paper is a very productive stab at specifying and motivating a particular suite of mechanistic correlates of consciousness, even if it doesn’t completely close the explanatory gap. I wonder if Graziano has dealt carefully with it. Maybe not since he doesn’t buy the existence axiom.

  11. Arnold Trehub says:

    “This means that conscious experience isn’t produced as a further physical effect that’s detectable from outside. It exists only for the subject.”

    Is this claim challenged by the fact that a self-induced hallucinatory experience in the SMTT experiment is shared by independent observers looking over the subject’s shoulder?

  12. Stephen says:

    IIT simply asserts that consciousness comes from the traits of information. That’s why Graziano considers it just magic (or phlegm, in his current metaphor). It’s not just a problem of “explanatory capacity”, it’s a complete lack of explanation for their assertion. For me, it makes the theory just an interesting adjunct to the consciousness discussion.

    Any theory of consciousness will have to take into account the neurological basis of the brain, in particular, the way the brain represents itself and its environment using models or schema. It might be discovered that these schema don’t contribute to consciousness, but are just consciousness content. Or not.

    To venture into a little science fiction; might it be possible some day to determine the location of the schema for a particular quale? After all, someone identified the “Bill Clinton” neuron in a person’s brain. If the location is known it might then be determined what form the information takes in the brain. If that is known we might compare two people, to see if their quale representations, and thus possibly their experience of the quale, are equivalent. Or even further, to insert one person’s quale representation into another’s schema. Suddenly we get a glimpse into someone else’s mind. OK, not in my lifetime. However, what would someone on an 18th century sailing ship have thought of smart phones?

  13. VicP says:


    We had a retirement dinner for a colleague a couple of years ago. Many of us didn’t know he had an identical twin, so we were all greeting him and talking to him when we arrived at the hall. This is a case where identical qualia induce identical meaning.

    The qualia for suddenly seeing a lion and receiving a tax notice from the IRS are very different but may induce identical meanings concerning our physical well-being vs our financial well-being. The perceived structure of the network on a magic scientific instrument is no different from seeing the non-identical bit pattern for an external object induced in a Mac vs IBM vs Windows vs Linux etc. The important fact is the meaning induced by the external environment.

    Any scientific work by a legit scientist whether it is AST or IIT is noteworthy. I’m a fan of microcolumnar architecture:

    “The hypothesis requires that nerve cells in middle layers of the cortex, in which thalamic afferents terminate, should be joined by narrow vertical connections to cells in layers lying superficial and deep to them, so that all cells in the column are excited by incoming stimuli with only small latency differences. The columns form a series of repeating units across the horizontal extent of the cortex.”…Do these form Supercells which unify even further? Nature follows patterns.

  14. John Davey says:

    “He proposes instead a project which asks, not why we have subjective experience, but why we think we do: why we say we do with such conviction. The answer, he suggests, is in metacognition.”

    So his answer to a lot of non-explanations of consciousness (or consciousness denials) is yet another non-explanation/denial.

    You gotta laugh.

  15. Stephen says:


    My point about the science fiction qualia investigation was really to highlight how much knowledge we are lacking about the “easy problem” and how many of our current conclusions are tightly tied to our current technology. My belief is that there needs to be a lot more investigation of the mechanics of consciousness before a sensible theory of consciousness will be postulated.

    With qualia, it is generally accepted that everyone has their own and we cannot know about someone else’s, except by the indirect process of description. I think we could challenge that someday.

    I agree that the work done by all the legit researchers and the discussions that follow are worthwhile. Some is more worthwhile than others, though. The lack of a solid foundation for current consciousness theories is what Graziano continues to rail about. That’s why he has backed off calling his own Attention Schema theory a theory of consciousness. As far as I can tell, it’s a theory of consciousness content.

  16. VicP says:


    Yes I agree and see your point.

    Truth is there are major areas of neural anatomy like glia cells and microcolumns that I pointed out, which they are not explicit about.

    We know the overall architecture and function of the cardio and pulmonary systems and how they integrate through the role of blood, but neural architecture is just signaling between the various neural organs. The metaphysical gap is understanding the “blood”, or “wine of consciousness” as Colin McGinn calls it.

    I think the flow of attention in the brain is our actual perception of time. Time perception is essential to any basic nervous system for movement.

  17. John Davey says:

    I suggest that if Graziano wants an explanation as to why philosophers are getting nowhere with consciousness, he should look at other scholastic endeavours on which huge amounts of effort were expended by the brightest and best minds of the era to no apparent avail.

    For instance, Medieval thinkers of great renown discussed at length the treatment of the souls of ‘dog people’ (humans with the heads of dogs). Were they mainly dogs or humans? Did dogs have souls, and if not, how much human did they have to be combined with to have a soul? What was the proportion of human to dog in dog people? Should they be given a Christian burial? Could they sin?

    And on and on and on. The reasoning, and the techniques of argument were exactly the same as today. The problem was the starting point – there are no such things as dog people, and it was not until Europeans travelled more extensively that the assumption that dog people existed was allowed to die quietly.

    It’s the same with consciousness. There is a fixation that the brain is an “information processor” along the lines of a computer and that means that all mental phenomena, however uninformative, have to be analysed in terms of “information flows”, “representations”, “models” and the like.

    It’s exacerbated by the fixation with the idea that physics – which revolves around the idea of the syntactical-mathematical model – must be able to explain it all somehow.

    The most obvious thing for medieval thinkers should have been to question the assumption that “dog people” existed. There were no great writers of the era who had claimed to have seen them : few convincing witnesses. But there were too many eminent men who had written to the effect that “dog people” must exist, so the ridiculous conjecture carried on for decades.

    Likewise, in the current era, the most obvious thing about consciousness is to assume, first and foremost, that consciousness actually exists (something that a lot of people find dreadfully hard to do, despite the literal evidence of their eyes), that it is caused by brain matter, and indeed that it is the particular construction and material make-up of brain matter that is responsible for consciousness. The latter point seems to throw people into a terrible mess: ‘platform independence’ seems a mantra which needs to be met under all circumstances, despite not one ounce of scientific reasoning being provided to demand it be so.

    After we accept these basic reasons we can abandon the attempt to explain consciousness from philosophical principles and leave it to the scientists. Perhaps of more interest than the ‘mystery’ of consciousness is the mystery of why the above common sense account creates such trauma in intellectual circles. It’s the contemporary equivalent of the ‘dog people’ problem.

  18. VicP says:


    Physics does explain a lot but that’s provided you account for something more than particles and locations. They wave electrodes over brains and detect electrical activity but waving electrodes over the case of my iPhone also produces fields. So what if you can’t dig any deeper?

    The most obvious fact is that as you progress from particles to atoms to inorganic to organic matter, the complexity of the structures increases but, more importantly, the size grows and the speed slows down. There are natural forces at the physical level at work in cells and neurons, but all they do is wave their magic wands.

    They spook at panpsychism, but what’s wrong with saying everything in nature is an electric motor (“panmotorism”), since emf properties are present in every atom?
    Well, it’s obvious they understand that structure shapes the forces of nature in a motor armature, but neurons are magic to them.

    The dog people argument is like the argument which says consciousness has no causal role.

    If you ever attend a consciousness conference, bring dog food.

  19. Stephen says:


    Regarding time – It also occurred to me that time and attention were related. Graziano brings up the concept of “now” in his paper on attention schema. “Now” could be the window of content in the consciousness for each attention stream. The fuzziness of “now” might come from the different lengths of each stream. For example, reading a paragraph while listening to the sound of a truck backing up each have different time frames.


    While the philosophers come up with some odd ideas, they are also an inventive lot capable of seeing concepts and relationships that a neuroscientist might miss. Whatever they come up with, though, must pass the test of compatibility with the neurology of the brain. I agree that they get off track when they focus on zombies or Mary, neither of which are real or even possible.

    I am a bit confused about your position, though. If you toss out the philosophers and metaphysics and also toss out physics, brain schema and information flows (i.e. most of what we currently know about neuroscience), what’s the investigative path?

  20. Arnold Trehub says:


    Regarding time, the short-term memory properties of autaptic neurons in retinoid space enable the temporal binding of successive stimuli in our extended present. So we are able to experience tunes and sentences.

  21. VicP says:


    Yes, for scientific work we use electronic clocks. However, our biological timing sense does very well at adapting to many situations.

    In other articles he uses “cartoons” and “cartoonish”, but cartoons are sophisticated technology. Is a cartoon something inferior? A cartoon as compared to real life? Next time you drive on a freeway, consider those vehicles as being navigated and controlled by cartoons. “Cartoonish”: a typical philosophical reference which is painfully circular.

  22. Michael Baggot says:

    This most recent entry is only returning one response although it claims to have recorded 21. Could you please check it out, thanks.

  23. John Davey says:


    Science means not predetermining what the investigative path might be. It is a job of work and patience, prodding at the peripheries and making incremental empirical discoveries which may eventually allow some bright spark to make a great leap forward.

    The investigative path at the moment involves studying that which is measurable about brains and plodding away at such basics as chemistry and biophysics metrics, with reported mental experiences from subjects.

    We know so little about what brains do – it is ludicrous in the extreme to try to leapfrog the hard work necessary to understand brains – in total, not just consciousness – by attempting what amount to pen-and-paper designs of consciousness by people who are used to designing computer programs and just seem incapable of seeing the blind alley they are waltzing down, more often than not with a sackload of taxpayers’ money.

    In short, the investigative path starts with some humility before the scale and complexity of the subject matter. And instead of leaping to human consciousness, perhaps we could start with bees, whose brains are far smaller than humans’ but about which we know nothing.

  24. Stephen says:

    John Davey:
    “Science means not predetermining what the investigative path might be.”

    I can’t agree with you on this one. In your previous post (now disappeared, hopefully to return) you mentioned all the things that shouldn’t be investigated, seemingly leaving nothing to investigate. Now here you say it will somehow just appear. In science, you have to make a proposal before you start investigating. I’m thoroughly confused.

    Further down in your post you suggest we study bee brains. Of course, researchers are studying the brains of fruit flies and other small critters for just the reason you mention: they are simpler. No problem with that, although it might require some of the things you earlier didn’t want to include in research.

    Generally speaking, I can agree that we probably aren’t going to understand consciousness until we understand more about the workings of the brain. That was in one of my posts above. That doesn’t mean researchers or philosophers shouldn’t muse about how consciousness occurs. As far as I know, things tend to be studied on a broad front and gel into something clear through serendipitous research. It usually doesn’t come from a linear “start small and work your way up to the most complex”, although that approach is always happening in parallel and is useful.

    I think by cartoons, Graziano means schema that have simplifications from reality as well as exaggerations of various parts that may not correspond well to reality. Personally, I think of them as useful abstractions.

  25. John Davey says:


    Let’s look at Newton. He understood the operation of gravity but was as perplexed as anybody else as to its origins. It took two hundred years for physics to evolve before Einstein was able to make the theoretical leap – on the back of far more knowledge – that permitted some explanation of the origin of gravitation.

    Newton didn’t waste much time on discussing the origin of gravitation because he couldn’t. There was nothing to go on.

    It appears to me that you are suggesting that Newton should have ‘made suggestions’ about the causes of gravity simply because he could – that he should have spent lots of time on it merely because he could. But it would have been wasted time, of course, as he never could have got there: there wasn’t sufficient knowledge, or anything like it, at the time.

    As the vernacular goes, you can only walk after you’ve learned how to crawl. In neurological terms we’re barely out of the afterbirth.

    How about trying to work out how the 302 neurons of a nematode enable it to live? Thus far it’s proved too complicated. 302 neurons. How is a general theory of human brains going to emerge when nematodes can’t be cracked? We don’t have the science, the measurements. That means we don’t have the evolved conceptual apparatus from which big leaps can be made.

    We are stuck with ‘information processing’ models and computational analogies arising from a belief that modelling a brain reproduces a mind. It’s a major fault – but unlike the medievalists who believed in dog people, we do have the apparatus to know that this is an incorrect assumption. Apart from being littered with category errors, it’s covered in naive Enlightenment propaganda about the importance of ‘thinking’.

    Consciousness is phenomenal and has nothing to do with thinking. The experience of blue is not thinking. Wanting to defecate is not a thought. We know that the brain does far more than thinking – in fact thinking is the least interesting thing a brain does.

    Information models focus on the act of thinking and the contents thereof, and are thereby deeply flawed. They implicitly exclude non-thinking, like feeling and consciousness. That is why they get nowhere and never will.

    So that’s why there is no justification for using them now. We have reasons to know that they won’t work. The trouble is that too many eminent men have written too much on this subject with such gravitas that the nonsense simply won’t go away. In this case, though, it has the effect of positively hindering research, as institutional funding is clearly getting wasted on this.


  26. Arnold Trehub says:


    You have too pessimistic a view of the progress that is and can be made in understanding consciousness by theoretical modeling and empirical testing of neuronal models. Your intuition is challenged by the success of the retinoid model of consciousness in predicting/postdicting many previously inexplicable conscious phenomena. Our understanding of how a brain can represent its volumetric surround from a fixed locus of perspectival origin is the first step.

  27. Arnold Trehub says:

    The results of this experiment were predicted by the neuronal structure and dynamics of the retinoid model of consciousness:

  28. Sergio Graziosi says:

    As some of you surely know, I’m a bit of a conflicted fan of Graziano’s work, and it’s really satisfying to see the interesting reactions here.
    [Peter: what happened to all the disappeared comments? Sorry to add to the pressure…]

    Scott: I’ve read your negative reaction to Graziano’s Attention Schema back then and I was baffled. As usual, I find it hard to distil the core of your argument. Are you able to explain, in a few plain-English sentences, what the fundamental error of the Attention Schema is? (Emphasis on “plain English”, in the sense that I’d be grateful if you could explain it to me as you would to a child! Sometimes I really am a bit slow…)

    Jochen, we’ve sparred around these concepts before, and I confess I am still unable to explain the fundamental inversion that one needs to adopt in order to respond to your legitimate worries. At least, I can’t explain it in simple terms so as to satisfy the kind of request I’ve just made to Scott. I’ll indulge in one more attempt: if I keep trying, perhaps one day I’ll find a way to translate my intuition into something intelligible (assuming the intuition itself does make sense).

    Simple story: something touches my skin, this activates nerve terminals, Meissner’s corpuscles and the like, generating electrical signals in sensory nerves. If you accept that, you are already describing what’s going on as some very brute form of modelling: a signal represents something else, in the same way as elements of a model do. The nervous system now carries and therefore contains a representation of something happening, or that has happened, in the real world. I would then hope that you’ll agree in concluding that the “meaning” of a given signal ultimately depends on the effects that such a signal will or might have on the perceiving organism. We should also mostly agree in observing that the brain integrates vast amounts of such signals (including those generated in response to internal states) and does so in ways that are specifically attuned to producing appropriate behaviours, where “appropriate” is understood in terms of survival and reproduction.

    If, for the argument’s sake, you can cut me some slack and temporarily assume (for just a little longer) that at least some of the integration activity described above “feels like something”, what you get is that the brain is likely to join together lots of elemental incoming signals into one or more coherent aggregates (something we normally call binding). In this way, the stuff we originally had difficulty classifying as part of a model (our starting point, a single signal travelling towards the brain) is now joined together into something that we do recognise as a model, which can be used to guide behaviour: roar sound comes from a speaker, do nothing/turn down the volume. Or: roar sound is generated by a lion coming my way, jump in the car, close the windows and go away / hope for the best. This doesn’t solve the problem of needing a central observer to interpret the model but, to those who share my intuition, it dissolves the problem: we are already looking inside what constitutes the central observer, and the mechanisms we can find therein are what’s needed to interpret the model (an interpretation which is one and the same as producing hopefully appropriate behaviour). The next step is to realise that in order to produce behaviour appropriate to complex situations – or, if you prefer, to circumstances that change too rapidly for evolution to hard-code all of the decision trees into our genes – you need to add memory: a trace of what has been modelled in the past, which may then be used to guide future behaviours. Brains are also in the business of building and refining ever more complex, more appropriate, decision trees.
    Finally, internal states, and the relation between them and the environment, are ecologically of primary importance; thus the body itself produces plenty of signals representing internal states, and those form a crucial part of the memories stored, as they are fundamental in identifying the appropriateness of decisions. In this way our organism acquires, in a series of necessary steps, the full range of characteristics that we ascribe to a “central observer” (remember, we are looking inside the central observer from the beginning).
    Crucially, stuff acquires a “something it is like” because/if/when stored memories can be recalled and somehow (faintly) replayed, and because/if/when doing so, they have the potential of creating more traces of what happened, that is, our organism might become able to recall the memory (just a shadow) of what was modelled when it was busy recalling a memory. In this way, metacognition is produced, and with it, the realisation that whatever happens has its own qualitative feeling, made accessible by the trace, the impoverished versions of past models of Self+World which are stored in short, medium and long-term memory (the longer the term, the more impoverished the memory appears to be, by the way).
    There is no mysterious ingredient in here, each passage is at least in principle reducible to physical mechanisms; I believe BBT, Metzinger’s transparent self model, Graziano’s Attention Schema and my little ETC all point at the same overarching interpretation. In this view, qualia seen as “the redness of red” appear to the central observer as intrinsic and ineffable qualities because the stored traces (memories) which enable metacognition don’t include information about how stuff is represented.
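The loop I’m describing can be caricatured in a few lines of code. This is a deliberately crude, purely illustrative sketch (nothing here is anyone’s published model, and all names are invented): recalling a stored trace is itself an event that gets stored, so the system ends up with records about its own representing.

```python
# Crude toy: recalling a stored model is itself an event that gets
# modelled and stored, so the system accumulates traces *about*
# its own representing (a stand-in for metacognition).

memory = []

def model(event):
    """Store a (very impoverished) trace of a world event."""
    trace = {"kind": "world", "content": event}
    memory.append(trace)
    return trace

def recall(index):
    """Replay a stored trace; the act of recalling is itself stored."""
    replay = memory[index]
    memory.append({"kind": "recall", "content": replay})
    return replay

model("lion roar")
recall(0)

# memory now holds a world-trace and a trace about recalling it
assert memory[1]["kind"] == "recall"
assert memory[1]["content"]["content"] == "lion roar"
```

Obviously real brains do nothing so tidy; the point is only that nothing in the loop requires an extra observer.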

    The mysteries come into the picture if you assume some additional things from the start, for example that any explanation of this sort should contain a single central interpreter (a separate, distinct structure within the organism in question), which will inevitably look homuncular and lead to infinite regress. Other mistakes include assuming that qualia need to be epiphenomenal, or more generally that simple mechanisms categorically can’t explain the qualitative side of experience (I regard this as an argument-from-ignorance mistake: not knowing how to explain X doesn’t mean it can’t be done). In each of these cases (and more), accepting such assumptions makes the hard problem unsolvable, reinforcing the mystery instead of dissipating it.


    Experience exists, but not intersubjectively like brains.

    I suppose you can count me among the physicalists who would disagree, but it takes much more to offend me ;-).
    Seriously, though: I was in awe of IIT the first time I actually engaged with it, through this paper in particular. For that version I had some quibbles and doubts about the grandiose claims, but the overall approach looked to me overwhelmingly impressive and promising. Since then I grew ever more sceptical, and eventually converged on Graziano’s position and maybe even went a step further. The key point is that equating integration with “what-it’s-likeness” is just thoroughly unwarranted. Thus, the direction I wished IIT to take was one of epistemic humbleness (surprise!): I was hoping Tononi would drop the grandiose claim (integration = inner feeling). One could (should?) instead accept that Information Integration (exactly as described) is necessary for consciousness, and therefore (in certain systems) strongly correlates with consciousness. This move would save the enormous practical potential of the approach without significant drawbacks. Instead, the path chosen looks like uncompromising pseudoscience to my eyes: IIT V3.0 introduces ad-hoc postulates which don’t immediately follow from the (mostly agreeable) axioms. In particular, the Exclusion postulate says “A mechanism can contribute to consciousness at most one cause-effect repertoire”, a statement used to sort-of justify the final one, “Of all overlapping sets of elements, only one set can be conscious”, which in my reading is a clause added to respond to previous criticism, and which is much worse than the original problem.
    I find myself unable to avoid thinking that this kind of trajectory is typical of theories that are already degenerating: you add stuff, sub-principles, constraints and special rules, all suspiciously smelling of “ad-hoc solutions”, in an effort to accommodate uncomfortable evidence while leaving the core of your pet theory untouched. In the case of IIT, this started happening even in response to a-priori, theoretical criticism (producing an odd kind of overfitting, a pre-empirical one!), making my expectations for the whole enterprise drop below zero in the space of one paper (as you would expect, I’m on Scott Aaronson’s side on this one). Still, there are so many good ideas in IIT, and I fear they will be wasted or become counter-productive just because the main authors/proponents can’t give up the ambition of solving the whole problem… The disappointment still hurts, and probably shows in my words above.

    So, overall, I’m on board with you and Jochen in saying the theory has a lot of potential, and largely for the same reasons you both mention. However, V3.0 to me looks like an irreversible step in the wrong direction, one which pride and peer pressure will make (have already made?) close to impossible to rectify. A very sad outcome, and a huge waste of potential.

  29. Arnold Trehub says:

    Sergio: “Crucially, stuff acquires a “something it is like” because/if/when stored memories can be recalled and somehow (faintly) replayed, and because/if/when doing so, they have the potential of creating more traces of what happened, that is, our organism might become able to recall the memory (just a shadow) of what was modelled when it was busy recalling a memory.”

    I don’t think that we can have the idea of “something it is like” unless the memory is represented in a spatio-temporal perspectival relation to something that corresponds to our self. Otherwise we simply have preconscious sensory excitation without perception — easily realized, for example, in photo-electronic artifacts.

  30. Sergio Graziosi says:


    I don’t think that we can have the idea of “something it is like” unless the memory is represented in a spatio-temporal perspectival relation to something that corresponds to our self.

    I’m with you on this (one reason why IIT doesn’t convince): I’ve tried to include the same concept by saying that “internal states, and the relation between them and the environment are ecologically of primary importance” (e.g. all this relational stuff needs to be remembered).

  31. Stephen says:

    John Davey:

    OK, I see your position now. I think researchers will work on issues where they have ideas, interest, aptitude and training (and, like the rest of us, funding). They don’t necessarily take the next job from a list. This works because no one knows for sure where each investigative path will end up or what ideas it may stimulate for other researchers. There may be a small increment forward, a big leap, closing a once promising path, a blind alley or even no result.

  32. Jochen says:


    If you accept that, you are already describing what’s going on as some very brute form of modelling: a signal represents something else, in the same way as elements of a model do.

    Unfortunately, this is already where I have to take my leave. The signals themselves don’t represent anything sans interpretation—and interpretation is the critical faculty we’re trying to pin down here. Whatever you get from sensory activation can be written as a train of ones and zeros (even if it’s a continuous signal); and those ones and zeros don’t single out any referent a priori.

    The problem is always just this: without a decoder, no string of signals/symbols has meaning; but if we assume a decoder, we fall into homuncular circularity. It’s the reason there’s no way to decipher a text written in an extinct language, if that text is all you have access to; it’s the reason structural realism flounders on Newman’s problem; it’s the reason computational theories of the mind don’t take off.
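To make the decoder-dependence vivid, here is a tiny sketch of my own (purely illustrative, not anyone’s proposal): the very same raw bits yield entirely different “referents” under different decoding conventions, so the bits alone settle nothing.

```python
# The same bit pattern "means" different things under different
# decoders; nothing in the bits themselves singles out a referent.
raw = bytes([0x41, 0x42, 0x43, 0x44])

as_text        = raw.decode("ascii")                     # "ABCD"
as_big_uint    = int.from_bytes(raw, byteorder="big")    # 1094861636
as_little_uint = int.from_bytes(raw, byteorder="little")  # a different number

print(as_text, as_big_uint, as_little_uint)
```

Pick yet another convention and the same four bytes become a colour, a timestamp, or an instruction; the choice of decoder does all the semantic work.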

    After all, if I had written this in Linear A, how would you ever have been able to read it? The relationship between symbols/signals and their meanings is wholly arbitrary and conventional, and without knowing the convention, you can’t associate any meaning at all to a set of symbols. No set of symbols represents something intrinsically.
    But if you appeal to convention and interpretation, then you’re even worse off, right down the road to homunculusville.

  33. Tom Clark says:


    “…without a decoder, no string of signals/symbols has meaning; but if we assume a decoder, we fall into homuncular circularity.”

    You pose the question of meaning and reference nicely, but it seems to me that a covariant system that controls behavior effectively with respect to the world – which is what the brain is and does – is all there is to intentionality. There need not be an interpreter of the system, which means there’s no regress or circularity. The conventionality and arbitrariness of being a particular system with just this mode of instantiating correspondences doesn’t undercut the objective behavioral success made possible by the correspondences. So if one knows enough about the context in which an extinct language was deployed, and has enough in common with those who used it, a translation might become possible. Or is there more to meaning and other representational, intentional phenomena than having success in behaving appropriately to objects and situations?

  34. Arnold Trehub says:

    Tom: “The conventionality and arbitrariness of being a particular system with just this mode of instantiating correspondences doesn’t undercut the objective behavioral success made possible by the correspondences.”

    Yes. Science values predictive success over intuitive counter arguments. But intuitive counter arguments can lead to revisions of mainstream scientific models if properly formulated and empirically tested.

  35. Jochen says:


    You pose the question of meaning and reference nicely, but it seems to me that a covariant system that controls behavior effectively with respect to the world – which is what the brain is and does – is all there is to intentionality.

    But covariance isn’t aboutness—almost any two systems may be brought into covariance by a suitable sort of mapping. After all, any system traverses a set of states s_1, s_2, s_3 and so on; it then only takes a map s_1 –> t_1, s_2 –> t_2, etc., to bring this system into covariance with some system traversing the sequence t_1, t_2,…

    But this doesn’t do anything to establish that the first system is about—or indeed, connected in any but a conventional way to—the second one. To see this, just note that you could substitute the second system with any other system at all (by compounding the map taking states of the first to states of the second system with another map taking states of the second system to states of some third one), or indeed, leave out the second system completely, without changing the picture—all the first system has access to are just its states s_1, s_2, and so on.
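The cheapness of covariance can be made concrete in a few lines (a toy sketch; all states and names are invented): any trajectory can be brought into covariance with any other of the same shape by sheer stipulation, and composing maps reaches a third system just as easily.

```python
# Toy illustration: "covariance" between two systems is cheap.
# System A traverses states s_1, s_2, ...; pick ANY other state
# sequence t_1, t_2, ... and a dictionary makes them covary.

a_states = ["s1", "s2", "s3", "s1"]      # trajectory of system A
b_states = ["t1", "t2", "t3", "t1"]      # trajectory of system B
mapping = dict(zip(a_states, b_states))  # s_i -> t_i, by fiat

# Under this map, A's trajectory "tracks" B's perfectly...
assert [mapping[s] for s in a_states] == b_states

# ...and compounding with another arbitrary map tracks a third
# system C just as well, so covariance alone picks out no referent.
c_states = ["u1", "u2", "u3", "u1"]
map_bc = dict(zip(b_states, c_states))
assert [map_bc[mapping[s]] for s in a_states] == c_states
```

Nothing about system A changed between the two cases; only our bookkeeping did, which is exactly the problem.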

    But in intentional cognition, we find that our mental states have a property that reaches beyond themselves, to the intentional object, which is not something that—to the best of my ability to tell—is present in systems merely covarying with one another.

  36. Sergio Graziosi says:

    Jochen, you keep pushing me off-balance! This time I’m puzzled:

    something touches my skin, this activates nerve terminals, Meissner’s corpuscles and the like, generating electrical signals in sensory nerves. If you accept that, you are already describing what’s going on as some very brute form of modelling: a signal represents something else, in the same way as elements of a model do.


    [T]he meaning of some symbol, or string of symbols, can be construed as that which the presence of the symbol causes an agent to do. These actions may include, but are not limited to, forming a certain mental image, executing a sequence of motions, performing a certain speech act, and so on. The motivation for this pragmatic definition of meaning is the fact that meaningless symbols typically do not elicit any reaction at all in a subject

    So here you ground the meaning of a symbol on what it makes an agent do. I can follow you and wish to do so (with some worries, below).

    (You again)

    The problem is always just this: without a decoder, no string of signals/symbols has meaning; but if we assume a decoder, we fall into homuncular circularity

    Trouble is, I can’t, if left to my own devices, reconcile the two quotes from you. In the M&M paper you espouse the idea that the meaning of a symbol is grounded on the actions the symbol (potentially) provokes. If we accept this view, I don’t see any danger of homuncular circularity or infinite regress. In my office I have a door buzzer installed: if I press a button, an electrical signal is sent to the main door, which makes the door lock open whenever things work as expected. If the meaning of a signal is grounded on the effects it has, the signal generated by the device in my office “means ‘open’ to the door” (with precautionary scare quotes). Whether I go and check if it worked, or whether I have any idea of how the signal is instantiated, is irrelevant. To some extent, if ice freezes the lock and the door fails to unlock, the signal still meant “open”, I think. So, where is the risk of circularity here?
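For what it’s worth, the buzzer example can be sketched as code (purely illustrative; the class and attribute names are made up): the signal’s “meaning” is identified with the effect it is wired to produce, and no observer appears anywhere in the loop.

```python
# Toy version of the buzzer: on the meaning-as-effect view, what a
# signal "means" is exhausted by what it is wired to do.

class Door:
    def __init__(self, frozen=False):
        self.locked = True
        self.frozen = frozen  # ice on the lock blocks the mechanism

    def receive(self, signal):
        # The effect the signal produces here *is* its meaning,
        # on this view; no interpreter is consulted.
        if signal == 1 and not self.frozen:
            self.locked = False

door = Door()
door.receive(1)             # buzzer pressed: the door unlocks
assert door.locked is False

iced = Door(frozen=True)
iced.receive(1)             # same signal, but ice blocks the effect
assert iced.locked is True  # it still "meant" open, yet failed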

    With Tom, I don’t really see a problem. Sure, calling the change in electric potential in the wires connecting the device to the door “a signal” is somewhat arbitrary, but only to an extent: it does happen to be a parsimonious and effective way of describing what’s going on.
    So, I reiterate: whenever we feel that calling something “a signal” is justified, we are “already describing what’s going on as some very brute form of modelling”. I chose my words very carefully (for once!): we are describing things through a given explanatory lens. You may feel compelled to equate this to some categorical ontological claim, but I don’t feel the same.

    Incidentally, this is the problem about your M&M paper that bugs me most, with many apologies for raising it so late and for doing it so hastily (I have saved a longer version of this in some drafts folder of mine, if only I could remember where…). If you do ground meaning in the actions it provokes, you already have escaped the risk of infinite regression (again, Tom seems to agree with me – see the trivial example above), so the one move you make at the beginning of the paper already grants you overall success (to my eyes, as usual I presume something is blinding me), making all the clever stuff that happens below look perhaps unnecessary…
    Furthermore, as much as I do wish to make that same move, I’m not sure the reasons listed in its support are compelling enough. For example, you’ve just told me I can’t do the same move, so I remain with the impression that something escapes me.
    What am I missing?

  37. Tom Clark says:

    Thanks Jochen.

    Re: “we find that our mental states have a property that reaches beyond themselves, to the intentional object, which is not something that—to the best of my ability to tell—is present in systems merely covarying with one another.”

    I’m wondering how this property is detected and/or specified as something objective about mental states, or is it rather more like an accompanying subjective sense or conviction that we know what we’re talking about? A kind of felt anticipation of behavioral success?

    For creatures like us, I’d suggest that semantic co-variance – aboutness – is a complex function of our heritable brain-ware shaped by constant interaction with the world starting from birth – it’s not so “mere.” The causal connections are clear and tight, and that’s what secures the meaning of terms and the intentionality of certain brain states. I’m not sure what more needs to be added to create *real* aboutness, such that the property you speak of comes into existence.

  38. Jochen says:


    In the M&M paper you espouse the idea that meaning of a symbol is grounded on the actions the symbol (potentially) provokes. If we accept this view, I don’t see any danger of homuncular circularity or infinite regress.

    Well, the logic of the paper is exactly that it’s not sufficient to merely ground meaning in action. That’s the point of the Chinese Room-section: there, I demonstrate that the meaning-as-action account allows for transferring meaning to the room’s occupant by having him perform certain actions. But this only works if the occupant himself is already an intentional being—thus, we’re left with a homunculus again.

    Suppose you feed a robot certain symbols, and observe the robot perform certain actions—you can then derive the meaning of those symbols for the robot via these observations. But this requires the robot, the symbols, and the actions to be present within the purview of your intentional cognition—hence, the robot can’t use the same strategy to ground its own symbols. So there needs to be something more to get rid of the circularity; and that’s the role that my active symbols are supposed to fill.

    So in a sense, the whole point of the M&M paper is that meaning as action is promising, but on its own insufficient—which I hope you now see how it coheres with my earlier remarks here.


    I’m not sure what more needs to be added to create *real* aboutness, such that the property you speak of comes into existence.

    Well, my own proposal of what needs to be added is the “to”. To me, meaning is a three-place relationship—x means y to z—and homuncular worries arise whenever one needs to appeal to an external z in order to ground the meaning of x. This is why I proposed that one needs to ‘collapse’ this relation, such that x means y to that same x—then, there is no room for a homunculus to arise. The symbols themselves are the things that realize their meaning.

    So if something merely covaries with something else, then this doesn’t imply that this covariance means anything to anyone—although an already-intentional agent could use one system as a representation of the other (but to do so, it has to be already intentional, since it needs to cognize both systems, as well as the fact of their covariance). I want to get rid of that user, and hence, the arbitrariness.

  39. VicP says:

    If we’re discussing the constellations in the night sky, the earth is assumed because there is no night wrt a point in space. So the earth or z is assumed.
    However no objective discussion about the night sky is going to tell us anything about the earth that we are grounded in. Logic and math are all grounded in past-present arrows of time and it works great in our own bodies when we are standing on earth or in the Chinese Room. There is a fundamental structure of the nervous system which many do not understand; which is why so many dread the homunculus.

  40. Tom Clark says:


    Cashing out and ultimately eliminating homunculi (including the user) is just the ticket when looking for naturalistic and mechanical/algorithmic reductions of what seem to be mysterious phenomena, so I appreciate your efforts. If such an account works for intentionality, then we could in principle tell from the outside whether or not there’s meaning present in a system: it becomes an objectively determinable system property. But we might also need to take the system’s history and current environment into account if, as the dear departed Hilary Putnam put it, “meanings ain’t in the head”. Whereas, if you ask me, qualia don’t depend on external states of affairs to exist, just a sufficiently robust and recursive representational system carrying out its representational work.

  41. Sergio Graziosi says:

    Jochen: excellent, yes, I think I get it. I knew I was missing something; I blame my own pre-installed ideas for this: given what I believe, I could only jump forward in the direction I would normally follow.
    Will need to let the new insight sink in for a little while, though. I’m not sure we’ll converge, but I’m not sure we won’t either.

  42. Sci says:

    Tangent -> Article with Chomsky on Universal Grammar:

    p.s. Am I the only person who can’t see anything but the most recent 1-2 posts for this comment thread?

  43. Tom Clark says:


    “Am I the only person who can’t see anything but the most recent 1-2 posts for this comment thread?”

    No, same here, rather disconcerting!

  44. Peter says:

    Strange and worrying…

  45. Jochen says:

    Seems like the pre-arranged harmony between our comment posting and the comments appearing has gone out of whack, thus showing that the causal connection between the two only ever was illusory in the first place…

    However, maybe it’s only a theme issue—I have little experience with WordPress, but it seems that the standard tip to work out such trouble is to switch to the standard (twenty sixteen) theme WP offers. Alternatively, perhaps it’s some issue with some plugin?

  46. VicP says:

    There could be phlegm in the WP server?

  47. Peter says:

    the standard tip to work out such trouble is to switch to the standard (twenty sixteen) theme WP offers.

    It could be a theme issue. The one I’m using is quite old and extensively mucked about with by some skilless dork (me). In particular the comment numbering is a terrible kludge.

    Shall I switch and see what happens?

  48. Peter says:

    Blue, argh! Still not seeing anything earlier than Sergio March 24.

    However, I know the earlier comments are still in the database.

  49. Peter says:

    OK, fixed I think. It was dividing comments into pages but failing to provide a link to earlier comments. I take it we prefer a single page.

    For those of you who weren’t here when we morphed over to the default theme, it was pretty horrible. I might do it again for Halloween…

  50. Jochen says:

    Looks like everything’s back to normal on my end, also. Well done!

  51. Sci says:

    Thanks Peter!

  52. John Davey says:



    I’ve always got plenty of time for Chomsky; he just seems to make a lot of sense. Language is as built-in as 90% of all the other stuff we seem to do – amazing how much resistance there is to these ideas.

    On the topic of this website he appears to be indifferent, veering towards mysterian, although I suspect he just thinks we need to crawl before we can walk.

  53. Sci says:

    @ John Davey:

    I do suspect Chomsky is a mysterian, based on his lectures & writing. It’s not that he sees consciousness as “phlegm” – which anyway is really just cheap rhetoric from Graziano – but rather that the capacities granted to us by evolution will limit what we can say about the mind.

    Last I checked he thinks the body produces the mind in some way, but even there our definition of the matter that makes up the body is continually advancing. (To the point of utter weirdness I’d add…)

    You might enjoy The Machine, the Ghost, and The Limits of Understanding:

  54. VicP says:

    At 45:50 did the attractive young lady say “Batshit”?

  55. VicP says:

    1:06 “Philosophers often appear to want to get answers about humans that we can’t get about insects”..

    Or as I say it is a skills category error that they make.

  56. Sci says:

    For those interested more Chomsky on language (apologies for tangent to everyone else):
