It had to happen eventually. I decided it was time I nailed my colours to the mast and said what I actually think about consciousness in book form: and here it is. The Shadow of Consciousness (A Little Less Wrong) has two unusual merits for a book about consciousness: it does not pretend to give the absolute final answer about everything; and more remarkable than that, it features no pictures at all of glowing brains.

Actually it falls into three parts (only metaphorically – this is a sturdy paperback product or a sound Kindle ebook, depending on your choice). The first is a quick and idiosyncratic review of the history of the subject. I begin with consciousness seen as the property of things that move without being pushed (an elegant definition and by no means the worst) and well, after that it gets a bit more complicated.

The underlying theme here is how the question itself has changed over time, and crucially become less a matter of intellectual justifications and more a matter of practical blueprints for robots. The robots are generally misconceived, and may never really work – but the change of perspective has opened up the issues in ways that may be really helpful.

The second part describes and solves the Easy Problem. No, come on. What it really does is look at the unforeseen obstacles that have blocked the path to AI and to a proper understanding of consciousness. I suggest that a series of different, difficult problems are all in the end members of a group, all of which arise out of the inexhaustibility of real-world situations. The hard core of this group is the classical non-computability established for certain problems by Turing, but the Frame Problem, Quine’s indeterminacy of translation, the problem of relevance, and even Hume’s issues with induction, all turn out to be about the inexhaustible complexity of the real world.

I suggest that the brain uses the pre-formal, anomic (rule-free) faculty of recognition to deal with these problems, and that that in turn is founded on two special tools: a pointing ability which we can relate to H. P. Grice’s concept of natural meaning, and a doubly ambiguous approach to pattern matching which is highlighted by Edelman’s analogy with the immune system.

The third part of the book tackles the Hard Problem. It flails around for quite a while, failing to make much sense of qualia, and finally suggests that in fact there is only one quale; that is, that the special vividness and particularity of real experience which is attributed to qualia is in fact simply down to the haecceity – the ‘thisness’ of real experience. In the classic qualia arguments, I suggest, we miss this partly because we fail to draw the correct distinction between existence and subsistence (honestly the point is not as esoteric as it sounds).

Along the way I draw some conclusions about causality and induction and how our clerkish academic outlook may have led us astray now and then.

Not many theories have rated more than a couple of posts on Conscious Entities, but I must say I’ve rather impressed myself with my own perspicacity, so I’m going to post separately about four of the key ideas in the book, alternating with posts about other stuff. The four ideas are inexhaustibility, pointing, haecceity and reality. Then I promise we can go back to normal.

I’ll close by quoting from the acknowledgements…

… it would be poor-spirited of me indeed not to tip my hat to the regulars at Conscious Entities, my blog, who encouraged and puzzled me in very helpful ways.

Thanks, chaps. Not one of you, I think, will really agree with what I’m saying, and that’s exactly as it should be.


  1. Peter says:

    Fair warning: although it does give an account of the history and several of the main theories, this is ultimately about what I think – it’s not really Conscious Entities in book form. I might go on to do that, though, if not discouraged.

  2. Hunt says:

    I had no idea you were working on this. Congratulations and thanks, just downloaded the kindle version.

  3. Richard J.R.Miles says:

    Well done Peter, read the bit on amazon and looking forward to reading the rest.

  4. Sci says:

    Congrats Peter, looking forward to it!

  5. Tom Clark says:

    Author, author! May all your hypotheses be vindicated.

  6. Vicente says:

    Congratulations Peter!!

    No surprise, I have been expecting the news for quite some time now…

    Can’t wait to read it. Also looking forward to the debates that the announced posts will raise… don’t expect any mercy 😉

  7. Eric Thomson says:

    Congratulations! I look forward to reading about your efforts, just as I always look forward to reading your blog.

  8. Sergio Graziosi says:

    Congratulations Peter!
    I am so looking forward to finally read your fully developed arguments, and disagree! Should be great fun 🙂

  9. Scott Bakker says:

    Got a link up at TPB, and it is being clicked a whole helluva lot more than my own books!

    I have to say I found your short history so clear, so elegant, that I’ll be recommending SoC for some time to come. On a more personal note, I’m green with envy: it’s no easy thing finding those grains of explanation that take you where you need to go without leaving too many people behind. This is the problem that’s been killing me for two months writing the introduction to Through the Brain Darkly.

    Either you’ve given me an inspiring model, or a mocking rebuke. The next couple of weeks will tell. How do you feel about a little ghost writing?

    I already have a number of questions regarding your positive account. Would you be amenable to putting up a précis here Peter? Something we could use as a forum to debate your ideas?

  10. Richard J.R.Miles says:

    Scott, I agree with you, Peter has a clear, elegant writing style. I only needed the dictionary a couple of times with this book, which was good for me. Some writers are so wordy they often confuse themselves as well as others. K.I.S.S. is an ability to be admired.

  11. Peter says:

    Many thanks indeed, Scott!

    The four posts (first coming soon) I have in mind on the main ideas from SoC are meant to provide a forum for discussion, but if people would also or instead like a general post or page, I’d be more than happy to do that.

  12. Arnold Trehub says:

    Peter, how would you describe the difference between “thisness” and “thatness”? I must say that your slim volume is an excellent concise overview of the topic.

  13. Peter says:

    Thanks, Arnold. I don’t think I’d try to describe that difference; it’s just specificity I’m really on about.

  14. Don Salmon says:

    Well, yes congratulations too – but I’m a bit confused. I don’t quite see what this has to do with consciousness as that IN which all this appears.

  15. john davey says:

    well done peter you put a lot of work into this blog hopefully it gets you some recognition

  16. Abalieno says:

    Hello there.

    I’m past the halfway point in the book and wondering if there’s someone who wants to engage with a few ideas. The problem I have is with the lengthy part dealing with the “easy problem” at that middle point…

    Now I have a very simplistic point of view on that argument, but it seems to explain all the problems quite well, including intentionality and moral responsibility. At the same time the “evidence” in those pages seems to build toward some “special” property of the brain that has been very difficult to pinpoint. That’s the thesis, right? It seems you go step by step through problems like non-computability and emergence, all leading to the conclusion that the brain works in some special way that has made it slippery under the eye of Science. Unable to quite get it for what it is.

    Yet, the evidence of that, to me, looks very unconvincing. In fact I see evidence of the opposite: that the brain poses a problem that isn’t qualitatively special. It’s merely very complex. Why? The simple fact that we still don’t have even a decent simulation of brain thought, or consciousness, isn’t a fact that proves anything on its own. Let’s take another example of something that has a 100% clear mechanistic explanation: movement. Is there something in human movement that hasn’t been completely explained, or that is “special” in an ephemeral way like consciousness? I don’t think so. Then, it’s safe to assume that we should be able to reproduce human life-like movement very easily. Right? So look at the modern progress with robots and the like. Look how they move. It’s utterly disappointing. They are very, very far from human-like movement. Those modern robots are still extremely clunky, and in the few advanced cases where they can manage to keep balance and run, they still don’t look like anything human. They are misshapen in some weird ways.

    So the point is that even in something that is 100% understood, 100% mechanical, we still SUCK at reproducing it. We are still a LONG way from accuracy and valid imitation. Why then should it be different for something exponentially more complex, like the “brain”? Why should we expect things to be any different? Or why should we deduce that the reason we don’t have a functional model of the brain is not because of high complexity, but because the brain works in a “special”, “obscure” way?

    It seems to me there isn’t anything obscure. It’s just high complexity. You know that popular example of Chaos theory about the butterfly effect? That’s my explanation: complexity. I do not accept that a concept like “emergence” can exist within Science. The very idea that there are arbitrary “levels” where new phenomena appear that would be otherwise invisible if you merely observed the small parts, is absolutely ABSURD. You might as well believe in gods and spirits. So what’s “emergence”? It’s a human tool. It’s an abstraction we use to simplify complexity, reduce it to a level we can play with. Whenever we FAIL at tracking the complex chain of cause/effect, the picture blurs, so the gestalt we obtain is what we call “emergence”. Emergence being merely a failure of our models. A lack of accuracy and knowledge.

    Let’s say we’re trying to anticipate tomorrow’s weather: does our model include the butterfly effect? Of course NOT. And that’s why prediction models about the weather are only accurate to a certain point. In order to be PERFECT they’d need to model the WHOLE of reality, because in reality everything has a consequence on everything else. You cannot simply abstract and isolate some relevant parts, model them, and claim this model is accurate. It would be only partial. Reality isn’t made of mechanisms.

    Why should the brain be anything different? Why should we expect to build a working model of it by REDUCING and ABSTRACTING it to a context made entirely of a “room”, a “bomb” and a “suitcase”? No human being and no human experience exist in that total absence of complexity. Why should we expect our models to work if we erase all that complexity?

    The next step is that I have a little theory, extremely simplistic, but one that helps frame (and solve) all the big issues, as I said at the beginning. It goes back to the dualism well explained in the book and summarized effectively as: third-person (science), first-person (human condition).

    What is, concretely, this dualism? Occluded horizon of information. Partial information. If a system has “100 information”, and the system is closed, then if you only have access to 10 of that information it means you have partial access. That’s the brain and reality. That’s emergence. Emergence is the reason why we think a thunderstorm is sent by a god. Why? Because we can’t track cause/effect, so what appears to us is a GHOST OF VOLITION: a god. What happens with the weather also happens in the brain. Since we cannot track brain thought through introspection, and since consciousness is an illusion due to this very poor self-perception, it means we “appropriate” a process. We see it as a truncated gestalt. That means that the cause/effect chain becomes dark past a certain point. What remains visible is “consciousness”, being merely the very end of a process. Its tail. Just as the thunderstorm is cut off from cause/effect, so is human thought, and in the same way we see a god in the thunderstorm, so we see consciousness in the brain. The tail of a process.

    Going past this illusory, false perception is only possible through science. As was the case for all previous perceptual illusions, like the apparent evidence that the sun revolved around the earth. What is science? The third person. The voice from outside that whispers the truth to us. Reality as it is, opposed to reality as it is experienced. It’s a totalizing perspective that “explains away” the first person. Explains away consciousness.

    But is that so?

    There is a reason why we haven’t succeeded at simulating consciousness, and why we’ll NEVER succeed: consciousness is first person. I can be convinced of my own, but YOU cannot prove to me YOUR consciousness. So how can we expect to create an artificial consciousness when we still can’t PROVE that another human being has it? The qualia of experience is a first-person that CAN’T BE VERIFIED by another first-person. We hit a conceptual wall and my idea explains what this wall is: partial information.

    How can you solve dualism and escape the first person? By creating some kind of machine that can oversee and analyze ALL reality. All cause and effect. All particles in the system of the world. A complete model of reality. But let’s say this machine is possible. Once you’ve built it you need to create another machine, as big as the first, because in order to really close a fist on the system of reality you also need to include in the model the machine you just built. And so to track the function of that machine you need another one, as big and complex as the first. But then you’d need a THIRD machine, tracking the second you built. What’s the result here? The Russian dolls model. What we have obtained is another manifestation of Gödel’s paradox: you cannot close an ontology. You cannot completely control a formal system UNLESS you exit it. UNLESS there’s no self-representation that needs to be included in the model itself.

    This is the wall that defines human experience. Breaching this wall, from first to third person perspective, is IMPOSSIBLE. This rule is normative of reality. Transcending human experience means being a god: the entity that looks at reality from OUTSIDE. As if reality were created and sealed. A god is the entity that doesn’t need to self-include within a normative model.

    So what’s again the “human condition”, and why is it a DEFINITIVE condition instead of a transitory one? Partial access to information. Human experience is a slice of the world. Being a slice, it is partial. A first-person, a point of view.

    Why does this solve complex problems like moral responsibility? Because if on one side we know that we are caged in cause/effect, we also know that a human being as we know it is made of his own experience. Meaning that this specific body is a bundle of partial information, different from another body with its own bundle of experience. If this bundle here has killed some guy, then it means that the information contained there is mostly “bad”.

    What do we do with it? First, we avoid thinking in terms of “blame”, “guilt” and “punishment”. What we can do instead is think in terms of education. Education is a way to inject some good information into that bundle that contains a lot of badness. Injecting good parts so that they outnumber the bad ones, to the point that the guy starts working properly. And can be sent out again. Is the guy broken beyond repair? Then you keep him in jail. It means the education system isn’t good enough to cover that case. We should try to improve it.

    Can we discuss this somewhere at some point?

  17. Arnold Trehub says:

    Abalieno: “The qualia of experience is a first-person that CAN’T BE VERIFIED by another first-person. We hit a conceptual wall and my idea explains what this wall is: partial information.”

    You might want to read “A Foundation for the Scientific Study of Consciousness”, on my Research Gate page here:

  18. Abalieno says:

    If I read that correctly (and I might not), what you write there is about reconciling the first person with the third.

    You are looking from the scientific Point of View. It means you try to reconcile, or explain, the first person within the third one.

    That already happens. I wrote about it above: the third-person description is (eventually) able to “explain away” the first person as a cognitive illusion. Of course this is a simplistic explanation that satisfies me, whereas a scientist would demand much more insight into how it all happens, and come up with a very detailed description of every aspect.

    But the bottom line is that, in third person perspective, the first person merely doesn’t exist. The same as in science the presence of a god isn’t necessary: everything is explained without the need of a god.

    The thing I pointed out is instead that from a first person I can’t verify another first person.

    You might do that through some convoluted process that passes through the third person (the bridging you seem to write about), but the third person is ultimately unachievable for a human being (I would go there, but that’s about the domain of Free Will and before I can discuss that I need to know we agree on the basis I put above).

    Though I wonder why you rely so much on “subjectivity”. In my description subjectivity is merely what happens when you lose track of cause/effect. You lose the chain, and so what’s left is a truncated process you call “subjective” because it appears as if isolated from the rest (and so you appropriate it and call it “yours”).

    But from the scientific point of view IT IS NOT. Subjectivity is also explained away in the scientific approach. It’s an illusion like the rest of consciousness.

  19. Abalieno says:

    I’ll try to summarize the rest of my idea as well, so that one can see where it all leads. But I’ll leave some gaps that I’ll only explain if someone wants to actually discuss this in detail…

    Spoiler: the destination is about finding a situation where determinism is compatible with Free Will.

    Postulate (which should be intuitive enough to be accepted by everyone): an illusion can be rightly defined as “illusory” when we can acquire information, integrate it, and correct perception so that the illusion is properly flagged as “illusory” once we reach a new place we call “Truth”.

    Of course pragmatically a truth is only relative to a context, since we can reach a truth only when the context is relatively stable. An illusion can only be proven wrong when we can make a correction. If we can’t, we’ll have to settle for calling it “truth”.

    On the third person (Science) we assume we (eventually) have a 100% complete description of the world as a deterministic system. This means that all problems like consciousness have been explained away and properly defined as cognitive illusions.

    Now the point is that knowledge about an illusion doesn’t alter perception: you still see the illusion, the difference is now you know you’re seeing it wrong.

    The “special” problem with human experience is that knowledge about this specific illusion hits a hard normative wall that PREVENTS a correction (look at the bottom for an explanation of why it’s a special case). It means we “know” there’s an illusion and how it works, but we are unable to produce a change. It’s information that cannot be integrated. And information that produces no change is no information. It literally VANISHES.

    The result of this process is that a weird dualism is re-created: on one side we have a third person that makes possible a complete explanation of reality, including a description of consciousness as a cognitive illusion. But on the other side this knowledge doesn’t produce an effect. A consequence. We obtain a scenario where the illusion is framed and explained, but the leap toward the truth isn’t happening. It’s like waiting for the end of the world predicted by the Maya. You wait, and nothing happens.

    So we are stuck in the illusion itself WHILE we know the truth. And, as previously stated, a truth is only relative to a context. If we can’t achieve a position, if that position is only theoretical (like the idea of a god), then we can brand it a “myth”. A fantasy, while we can define our own immutable reality a “Truth”.

    The table is turned.

    That’s the conclusion: we “know” that human perception is illusory, but, since we cannot act on this knowledge, we obtain a split between knowledge and un-correctable experience. A dualism. And this dualism makes it possible to have a deterministic, complete description of reality on one side, while granting us Free Will on the other.

    The bigger truth (third person, determinism) invalidates the smaller one (first person, Free Will). They are, as we all know, incompatible. But since human beings can’t achieve the third person, and are bound to the first, then it means that the bigger Truth vanishes into myth. And we only have left the smaller one: Free Will.

    Of course, if you still agree with the process I described, the interesting part would be to explain why science allows us to acquire knowledge that doesn’t produce change. Why is this case special, compared to other illusions that can instead be surpassed as soon as they are revealed?

    Because “Science” is a theoretical abstraction. LITERALLY A MYTH. By “Science” we mean a description of reality we might eventually achieve. It’s an ideal. Being an ideal, as opposed to some concrete device or category, it acquires certain properties… As I explained above, a complete description of reality is only possible when it exists outside the system it wants to describe. This is exactly what happens with the ideal of Science: it’s a theoretical idea positioned right outside the system of reality.

    Yet, because we imply determinism (determinism means the system is closed), we imply that nothing can exist outside (otherwise you’d need to account for it, and so “bring it in” to the system, thus triggering Gödel’s paradox and being once again unable to complete the description).

    This produces an evident contradiction, and this contradiction is THE REASON why this knowledge is “special”. It’s theoretical knowledge that comes from an impossible place: outside the system.

    And because this knowledge is outside the system, it cannot be brought in. We can’t use it. The moment we try to use this information, the information vanishes. It’s a blank page. It’s like dreaming of a god giving you a piece of paper that reveals an incredible truth. But then you wake up, look at the paper and find it blank. You cannot return with that truth. The moment you pass the threshold is the moment that knowledge ceases to exist.

    Hence: dualism. Science is the voice from the outside. A mythological god whispering unachievable truths.

    At this point you can even attempt to fix the contradiction. But either you have a powerful Science that gives you eldritch, gibberish knowledge you can’t use, or a weak Science that is very useful but still won’t let you exit the first person, aka human experience.

  20. Arnold Trehub says:


    In my 3pp scientific formulation subjectivity (1 pp) is constituted by a brain mechanism with a particular kind of neuronal structure and dynamics. A detailed theoretical model of this mechanism enables one to successfully predict conscious experiences that were previously inexplicable.

  21. Abalieno says:

    Your thing is too technical for me to extract usable information from.

    The only elements I can extract from there are that you expect to use some sort of neuro-imaging to take pictures of the brain, associate them with certain conscious states, and then, when these things repeat, you’d have the sign that the same stuff is going on, and so you “verify” that this process is happening right now in that brain.

    But your definitions of “consciousness” and “subjectivity” there rely on other external definitions that aren’t clear at all to me. And when you redefine common words to give them some specific technical meaning, it becomes hard to use them in a discussion.

    Besides that, I cannot even follow the basic principles because there’s too much that is unsaid, and very complex to deal with.

    For example:
    1. Some descriptions are made public; i.e., in the 3rd person domain (3 pp).
    2. Some descriptions remain private; i.e., in the 1st person domain (1 pp).

    I’d argue that no descriptions ever exist in the 3rd person domain. It’s human beings that handle descriptions. I can communicate my description to you, and you can handle it. But it’s an exchange of information that happens between, and exclusively within, first persons. There is no actual 3rd person there. I cannot put a description on top of a table, nor can I communicate it to a dog, or a rock. And that’s why all dualistic theories are a mess: two domains can’t coexist. They are in contradiction with each other.

    So that kind of field is way too tricky to deal with and take for granted.

    And that’s also why the only scientific description of consciousness I tend to trust is the one that eliminates it. From the third person there is no consciousness because from the third person there CAN’T be dualism.

    Then, I also do not understand how your description of how 3D vision works is a “proof” of consciousness. Couldn’t the same mechanism exist in an unconscious brain as well? I mean, if I write a computer program that uses a mechanical eye, and it all reproduces your model, do I obtain a “consciousness”?

    “This experiment demonstrates that the human brain has biological machinery that can transform a 2D layout of objects in the physical world into a 3D layout in the person’s phenomenal world.”

    Yes. It could even happen in a computer game, where some monster uses some sort of mechanism to represent the space around itself, and calculate a path. Is that a consciousness? Is that subjectivity?

    I do not doubt that in that paper you explain some true property of the brain. But I do not understand how it is related to what we consider consciousness, or subjectivity.

  22. Arnold Trehub says:

    Abalieno: ” I cannot put a description on top of a table …” .

    Of course you can! You just put some descriptions on my computer screen on top of my table.

  23. Peter says:


    It’s not so much that I think our failures show there must be a mystery: I think a number of failures in several areas show similar features, which point to a common problem with a common answer, which I try to describe. In short, a lot of problems come from dealing with inexhaustibility, and I think the answer is pointing.
    I think you do basically get that, but you believe that the problem ultimately comes down to complexity: our attempts so far just haven’t matched the complexity of the challenge. That is a tenable view; the book is my attempt to make my own alternative diagnosis look convincing.
    Some of the things you are saying, especially about ‘Occluded horizon of information. Partial information’ sound reminiscent of Scott Bakker and his Blind Brain Theory – don’t know whether you’ve given his stuff a look.

  24. Arnold Trehub says:

    Abalieno: “Couldn’t the same mechanism exist in an unconscious brain as well? I mean, if I write a computer program that uses a mechanical eye, and it all reproduces your model, do I obtain a ‘consciousness’?”

    No computer program can reproduce the retinoid model. And as far as I can determine, no known artifact has a physical component that has an analog representation of the volumetric space in which it exists including a fixed locus of perspectival origin within this structure. In retinoid theory, these properties define subjectivity/consciousness in a living brain.

  25. Abalieno says:

    Peter, I came here because I’m one of Bakker’s minions, that’s where I saw your book 🙂

    Of course a lot of what I’ve read of him feeds most of my ideas.

    About complexity, I think it’s the answer to lots of things we can’t understand. Including emergence. When it comes to the brain specifically, my guess is that it’s a form of complexity that is slippery: second-order observation.

    The “strange loops” of Hofstadter are based on a sort of circular thinking and self-reference. That’s what I think is a basic pattern that is necessary for consciousness, and that’s also what, likely, gives people the competence to solve problems that computers can’t.

    Observing an observation is what allows one to do that kind of pattern-matching.

    A process observing a process is that special something.

    Arnold, of course you can put some description on a table, but it’s only relevant if someone reads it and knows what it means. So it still only works when a first person deals with it. In the third person world a written page has no “meaning” (since meaning requires a consciousness, after all).

    On a basic level, third and first persons aren’t compatible. Same as dualism. They are contradictory, or better: each already contains a description of the other, without needing the other. Each “explains away” the other. (If you don’t believe that a first person can exist independently of a real world outside, read about Constructivism.)

    Same as the problem of intentionality. If a written page had intentionality on its own, without needing to be read, then we’d have a description of intentionality directly in science / the third person. But since intentionality isn’t defined there, a page acquires intentionality only when read.

    “No computer program can reproduce the retinoid model. And as far as I can determine, no known artifact has a physical component that has an analog representation of the volumetric space in which it exists including a fixed locus of perspectival origin within this structure. In retinoid theory, these properties define subjectivity/consciousness in a living brain.”

    I’m very skeptical about this. Nothing you say seems all that impossible to reproduce. Nor do I think that if we could reproduce it we’d obtain an obvious artificial conscious entity.

    “a physical component that has an analog representation of the volumetric space in which it exists including a fixed locus of perspectival origin within this structure”

    This part doesn’t look like anything too hard to reproduce. Why would it need to be physical? A data structure in a computer can have an almost-analog representation of volumetric space. At that point you just need to add an “origin”, a self-representation.

    I’m just not sure how this would create a living, thinking being 😉

  26. Arnold Trehub says:

    Abalieno: “In the third person world a written page has no “meaning” (since meaning requires a consciousness, after all).”

    The third-person world is the public world in which each first-person has conscious access to the same physical object/event and can express an opinion about its meaning. See Fig. 1 in “A foundation for the scientific study of consciousness” on my Research Gate page.

    Abalieno: “This part [retinoid space] doesn’t look like anything too hard to reproduce.”

    I have asked many knowledgeable people, and none have been able to point to such an artifact or make any practical suggestion as to how one might be built. On matters of consciousness, biology is far ahead of engineering.

  27. Sci says:

    I like this quote Arnold:

    “I have asked many knowledgeable people, and none have been able to point to such an artifact or make any practical suggestion as to how one might be built. On matters of consciousness, biology is far ahead of engineering.”

    It goes back to something John Davey said – we should really be letting biologists catch up to some of the interesting possibilities in their field instead of thinking we’re going to crack problems of subjectivity and rationality in a few decades.

    Philosophers are always going to pick a paradigm and hawk it like some cheap market seller – it’s just what they do, and in its own way this kind of thing can be useful. (And of course you have scientists like Tegmark telling us we’re all made of maths, or Graziano advocating puppet consciousness…yet they also have had support from philosophers. Tegmark is in line with ontic structural realism and Graziano apparently consulted the Churchlands…and both ended up with hilarious conclusions…)

    Science can thankfully be ontologically neutral, following the theories which either provide explanatory value to previous discoveries or which have made unexpected successful predictions about the nature of our world.

  28. Sci says:

    *apologies that should be subjectivity and intentionality.

  29. Abalieno says:

    Well, in a discussion, simply saying an argument is valid because someone knowledgeable said so is a non-argument.

    We are discussing WHY. Not who.

    “none have been able to point to such an artifact or make any practical suggestion as to how one might be built”

    But why?

    Why do they think that it’s not possible to make an artifact that follows the same properties?

    If the theory is valid, then it should be reproducible. If for some reason it cannot be reproduced, then THAT is the big theme.

    Why is it not reproducible? What makes it hard?

    Those questions are the important ones that are being skirted.

  30. Abalieno says:

    If it’s not too bothersome I’ll continue to write some comments here from my own angle, as I finish reading the book.

    The pages about Searle and McGinn were interesting because of how they circle around the truth without actually providing a satisfying answer. They only give you that itch to know more, always leading to some formulation of the basic, apparently unanswerable questions.

    About Searle, we are back to the idea of creating a huge database in order to reproduce the “outputs” of a consciousness. But the approach is very obviously flawed even if it were possible to complete it. If you successfully mimic a behavior you can manage to fool recognition, but this type of approach could be used even for much less “hard” problems, and it would have the same limits.

    We are more interested in figuring out how a brain works than in simply aping one. Even if the process could be completed (and it cannot), you’d have one quasi-brain. If you then wanted to create a different one, you’d basically have to rebuild the whole database with all the new outputs. This is, at the very least, an incredibly inefficient process that leads nowhere useful.

    But that’s not the interesting part. The interesting part is how it all leads to the problems in the center:

    “that’s just stuff moving around; how can it even generate meaningful thought?

    Yet we know that shuffling minute bioelectrical impulses around within our brain somehow does the trick.”

    The problem is: semantics. Meaning coming out of formal symbols. What, then, is this actually about? It’s again the dichotomy. Semantics means a dichotomy, between symbol and meaning.

    The existence of language DEPENDS on the existence of a dichotomy. You have to accept this hard point because it’s the very premise that unlocks the whole puzzle. If, like in Searle’s example, you are stuck on one level, the level of formal rules, you’ll never have “semantics”, because semantics requires TWO levels. A dichotomy. So whenever you analyze things from one level, you see only that one level; the other disappears.

    The bottom line: meaning requires a dichotomy. The dichotomy comes out of self-observation, a system that recognizes itself as separate from its environment. When this happens you obtain a division between inside and outside (so two things), hence identity, and an idea of “self”. In order for meaning to exist you need to trace that line, and that line then becomes a barrier (Kabbalah gives it a name I forgot, but it’s interesting because it represents exactly this).

    If you abstract all this: we have always two domains. Two worlds. One inside the other, organized with a hierarchy. (why I say something so metaphysical will be clear later)

    Now McGinn: it mostly sounds absurd, but it’s interesting because of the words being used. Consciousness is outside human understanding because of “cognitive closure”.

    That’s again circling the truth. He’s onto something, but without the key to unlock that thing. He just knows something is there, but cannot quite grasp it. Cannot pinpoint it.

    The key’s here:
    “God, or perhaps even some alien being whose mind worked another way, would have no trouble understanding consciousness; but the rails on which our minds run just stop us ever getting to the right place.”

    Why is this the key? Because the “systemic closure” is exactly the real thing we deal with here. There’s a wall. A barrier (see above). A threshold dividing two worlds, where one contains the other.

    Read what I just wrote. What does it remind you of? That’s Gödel.

    The important part of Gödel’s paradox is the *pattern* it shows. Reformulating: in order to close a formal system, you need to sit outside of it. Otherwise you cannot create a formal system that contains you, the one giving the rules (that is, a formal system that contains itself). When you are outside, the system can be closed, formally. When you are inside, you need to include yourself, recursively, and the system never completely closes. It stays open.
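    To make that open/closed asymmetry concrete, a toy analogy in Python (my own illustration, nothing like actual proof theory): a structure described from the outside can be written out completely, while a structure made to contain itself can never be fully expanded.

```python
# From the outside, a "system" closes easily: list its contents and stop.
outside_view = {"rules": ["axiom1", "axiom2"], "theorems": ["t1"]}
print(repr(outside_view))  # a finite, complete description

# From the inside, the system must also contain a reference to itself...
system = {"rules": ["axiom1", "axiom2"]}
system["itself"] = system  # recursive self-inclusion

# ...and no attempt to write it out in full can terminate; Python's repr()
# only copes by cutting the recursion short with a '{...}' placeholder.
print(repr(system))  # prints something like {'rules': [...], 'itself': {...}}
```

    The placeholder is the interesting part: the description from inside is forced to leave a hole where the whole system should appear.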

    So this brings us back to the quote above: god can understand consciousness because god is that abstract entity whose defining property is “staying outside” a formal system. Like a computer programmer, who is a god to his own program simply by the property of being outside, looking in. Omniscience and omnipotence are faculties that apply to an entity outside a system.

    Let’s return to the original level. Given a cognitive closure, how can you step outside, in order to understand it? The book says that if this is true, we have no leverage at all:
    “The same limitations that prevent us understanding consciousness properly would also stop us noticing that there was anything mysterious about it.”

    Yet this happens. Why? Because science is exactly that voice from the outside, god-like, telling us what we otherwise wouldn’t know. The cognitive closure is total, but we still hear the voice of science breaking that closure. Factually. It’s science.

    The book goes on, and it actually nails the conclusion:
    “It is logically possible that we live in a world like that, where some features are just completely hidden from us and no trailing ends can be noticed, but there is no point in adopting the hypothesis since we can by definition go nowhere with it.”

    Yep, if there’s a cognitive closure, then it’s all a vain effort, because that closure is “virtuous” (as opposed to virtual, which would leave open the possibility of transcendence). It establishes the rules of the game we play.

    Fowles again explains just the same thing, the playing field:
    “We are in the best possible situation because everywhere, below the surface, we do not know; we shall never know why; we shall never know tomorrow; we shall never know a god or if there is a god; we shall never even know ourselves. This mysterious wall round our world and our perception of it is not there to frustrate us but to train us back to the now, to life, to our time being.”

    That’s again the mention of a wall. The “cognitive closure”. Gödel says that in order to close a system you need to be outside of it. So domains within domains, hierarchies of worlds. But let’s say that, because we hear the voice of science, which represents a domain outside our cognitive closure, we can actually surpass our own closure. What happens?

    Eliminativism (the voice of science, the mind as described from the outside). Consciousness empties itself out. Moves “upstream”, to wherever the new point of origin is.

    Otherwise, the cognitive closure, as a soft, perceptual barrier, simply represents an ideal limit that creates the illusion of a point of origin. The same thing we otherwise call: Free Will.

    A point of origin. And only a form of dualism creates the premises to have another point of origin that isn’t strictly outside of human grasp.

  31. Sci says:

    I feel like I missed something – Why is eliminativism the voice of science?

    Outside of that, it’s an interesting argument. Yet I don’t think it’s convincing without the actual evidence – in that it’s just the same as all the other philosophical -isms that’ve been offered up across history.

  32. Abalieno says:

    That’s where I think science will lead. As I wrote in some other comments, a scientific description is a description from the outside, so it eliminates the dualism. Consciousness will be “explained away”.

    There is no consciousness when you look at it from the outside.

    I also don’t think there’s anything philosophical to this scheme. It’s just a way to organize the information we already have. I use it because it explains in a simple way all the problematic questions and contradictions that arise when you deal with this stuff.

    The core of the idea is still based on the “Gödel, Escher, Bach” book plus a bit of information theory and language. So there isn’t anything controversial about it. It’s just how I brought certain themes together to underline the recurring patterns.

    For example, Bakker’s own BBT is 100% compatible with what I say. I only give it a broader context.

  33. Jochen says:

    So, Peter, did you notice Sean Carroll gives you a nod in his new book, The Big Picture? He quotes your characterization of the problems of consciousness as ‘the Easy Problem (which is hard), and the Hard Problem (which is impossible)’…

  34. Peter says:

    Ah, thanks, Jochen. I hadn’t seen that – fame at last!

  35. Jochen says:

    He should have paid a little more attention to what you say in the rest of the book, too, though.

  36. Patrick Kenny says:


    I greatly enjoyed your book. I had no idea it was possible to write so clearly and entertainingly about this subject. I would love to write a favourable review if I didn’t have so many disagreements with it, e.g. on the hard problem:

    On the other hand, I really like your take on free will. Has this idea been articulated by other philosophers? The neuroscientists Anil Seth and Karl Friston seem to be working on similar lines.

  37. Peter says:

    Many thanks, Patrick. Sorry we don’t quite agree on the Hard Problem (FWIW I don’t disagree at all with what you say, except I think what you’re talking about is a different problem from the one defined by David Chalmers and others).

    I don’t know of an account of free will exactly like mine, but this is extremely well-trodden ground, where the chances of real originality are pretty low!
