Crazyist

Eric Schwitzgebel has done a TEDx talk setting out his doctrine of crazyism. The audio is not top quality, but if you prefer there is a full exposition and a “short, folksy version” available here.

The claim, briefly, is that any successful theory of consciousness – in fact, any metaphysical theory – will have to include some core elements that are clearly crazy. “Crazy” is taken to describe any thesis which conflicts with common sense and which we have no strong epistemic reason for accepting.

Schwitzgebel defends this thesis mainly by surveying the range of options available in philosophy of mind and pointing out that all of them – even those which set out to be pragmatic or commonsensical – imply propositions which are demonstrably crazy. I think he’s right about this; I’ve observed myself in the past that for any given theory of consciousness the strongest arguments are always the ones against: unnervingly, this somehow remains true even when one theory is essentially just the negation of another. Schwitzgebel suggests we should never think that our own preferred view is, on balance, more likely than the combination of the alternatives – we should always give less than 50% credence to our preferred view or, if you like, never quite believe anything.

I won’t recapitulate Schwitzgebel’s case here, but it did provoke me to wonder about the issue of what we would find acceptable as an answer to the problem of consciousness. It’s certainly true that some theories would not, as it were, be crazy enough to appeal. Suppose the Blue Brain project triumphed, and delivered a brain simulated down to neuronal level. We could run the simulation and predict the brain’s behaviour; for anything the simulated person said or thought, we could give a complete neuronal specification, and in that sense a complete explanation. But it wouldn’t seem that that really answered any of the deeper questions.

Equally, for all those theories that tell us there’s really nothing to explain, that our consciousness and our selfhood are just delusions generated by aspects of the mental mechanism, one problem is that the answer seems too easy (though in another sense these views are surely crazy enough). We don’t want to be told to move along, nothing to see here, folks; what we want is an “Aha!” moment, a theory that makes things suddenly fall into place and make dramatic new sense. How do we get such moments?

I think they come from a translation or bridge that lets us see how one understood realm transfers across into another realm which is also understood but not connected. Maybe we find out that Hesperus is Phosphorus and that both are in fact the planet Venus; then the strange behaviour of the evening star and the morning star suddenly makes more sense. Another related way of generating the Aha! is to discover that we have been conceptualising things wrongly: that two things we thought were separate are really aspects of the same thing, or that a thing we took to be a single phenomenon is actually two different things we have conflated: temperature and heat, for example.

It certainly looks as if consciousness is ripe for an Aha! of those kinds – we have the two separate realms, the mental and the physical, all ready to go. But Colin McGinn has argued that the very distinctness of the realms means that no explanation can ever be forthcoming, and many people since Brentano have shared the sense that no bridging or reshuffling of concepts is even conceivable. The thing is, we don’t get the kind of paradigm shift we need by labouring away within the existing framework: we need something to jolt us out of it, and there’s no telling what. We know now that in order to come up with the theory that speciation occurs through differential survival of the fittest, you needed to visit the tropics and collect a lot of examples of local fauna, read Malthus, and then fall ill. Darwin and Wallace both had their dogmatic slumbers shaken up in this way; but it was not evident in advance that that was what it took. Perhaps even now there is a young doctor who has treated schizophrenics, has had the required motorbike accident, and is just about to read the text on encryption which is the essential precursor to the Theory.

I sort of hope and believe that something like that is the case, and that when the Theory is available we shall see that one of those theses that look crazy now is not quite what we thought: so crazyism will turn out to be false, or at any rate only provisional. But Schwitzgebel’s essential pessimism could turn out to be justified. We could end up with a theory like quantum mechanics, which seems to do the job so far as anyone can see, but which just refuses to click with our brains positively enough for the Aha! moment.

Schwitzgebel doesn’t spend much time on the wider claim that metaphysics as a whole is crazy, but it’s an interesting possibility that the problem doesn’t really lie with philosophy of mind but with something altogether deeper. Maybe we need to look away from the mind as such and spend some time on… what? Causality? Basic ontology? Once again, I have no idea.

75 Comments

  1. Arnold Trehub says:

    Schwitzgebel: ” A position is “crazy” in the intended sense if it is contrary to common sense and we are not epistemically compelled to believe it. Views that are crazy in the relevant sense include that there is no mind-independent material world ….”

    I’m not sure if this is a crazy idea, but the claim in retinoid theory that we have no mind-independent access to our physical world gets close to the notion that “there is no mind-independent material world”. In the retinoid theory of consciousness we have to be born with a brain/mind representation of our privileged surrounding world space-time — egocentrically organized Z-planes in retinoid space. Do you think this is crazy enough to satisfy Schwitzgebel? Isn’t this consistent with the anthropic principle? Is the anthropic principle crazy? My own belief is that none of this is crazy. These ideas just fly in the face of folk notions of naive realism.

  2. Gloone says:

    Great blog. I sympathise very much with what you’re saying, although I think it is mistaken. I think it is quite clear that all previous paradigm shifts were qualitatively very different from the one that you are considering. Natural selection was just an explanation of patterns in experience, and a mechanism by which they change. Other canonical paradigm shifts such as the Copernican revolution or special relativity were also of this type; they were just an explication of various aspects of observed reality.

    The problem of consciousness is totally different, because it is about the very issue of observation itself. It is intrinsically linked to problems of epistemology in which previous revolutions were totally uninvolved.

    I think if you analyse what the act of explanation is, you will find it is far less powerful than it would have to be. The domain of explanation is really very limited (essentially limited to patterns in experience), and otherwise utterly useless. Try to think of a single explanation that isn’t of the aforementioned type. As Hume said: our line is too short to fathom such immense abysses. The problem of consciousness is inherently unanswerable for beings at our level, by analysis of what the concept ‘answer’ actually designates.

  3. Vicente says:

    I wonder how close Schwitzgebel’s presentation is to naive realism…

    If humans are conscious, dogs are conscious… DNA molecules are conscious… so DNA molecules have a soul…

    I think that the concept of an “epistemologically compelling” argument is not solid or robust enough to justify the use of a pejorative term like “crazy”.

    How come brilliant minds, like the British physicist and mathematician William Kingdon Clifford, capable of producing top-class logical and epistemically compelling arguments, end up being forced to create terms like “mind-stuff”? How come so many of the people who have best understood the physics of the Universe are so humble when approaching the problem of consciousness…

    At some point in his speech Schwitzgebel makes a reference to the colour red that makes me think he should study a bit more physics and natural science before being so arrogant.

  4. Arnold Trehub says:

    Gloone wrote: “Natural selection was just an explanation of patterns in experience, and a mechanism by which they change. Other canonical paradigm shifts such as the Copernican revolution or special relativity were also of this type; they were just an explication of various aspects of observed reality….. The problem of consciousness is totally different, because it is about the very issue of observation itself. It is intrinsically linked to problems of epistemology in which previous revolutions were totally uninvolved.”

    1. It seems to me that *all* scientific explanations, including any credible explanation of consciousness, *must* be explanations of particular patterns in experience. What else could they be?

    2. I agree that the problem of consciousness is a peculiar kind of scientific problem. I believe this is so because it requires an explanation of subjectivity itself — 1st-person phenomenal content. But I do not agree that the problem of consciousness is “inherently unanswerable” in scientific terms.

    3. Gloone: “The problem of consciousness is totally different, because it is about the very issue of observation itself.”

    I disagree. The notion that to be conscious is to be an *observer* is, I believe, one of the root impediments to our being able to explain consciousness. In my view, we are conscious *if and only if* we have a brain representation of *something somewhere* in a perspectival relationship to a fixed spatiotemporal locus, which I designate as the self-locus (I!). This fundamental egocentric representation in our brain is not an observation. It is a phenomenal/conscious experience. Just having this internal representation of our surrounding space (the world around us) is our minimal subjective/conscious experience. For more about this see:

    http://evans-experientialism.freewebspace.com/trehub01.htm

    and

    http://theassc.org/documents/where_am_i_redux

  5. Kar Lee says:

    Gloone [2],
    I think you have a good point. However, consciousness would not have been a “problem” were it not that we can lose it, as in death. If we existed forever, there would really be nothing to explain. The existence of consciousness (basically it means you exist) is really just as natural as the existence of the phenomenal world for you, and there is really nothing to explain. The opposite would simply be something unimaginable for the immortals. (“Why nothing?” Isn’t it a funny question in itself?) The immortals would still seek explanations of how the phenomenal world works, so relativity and the Copernican revolution would still happen, but they would not try to explain themselves. But people can die. We all can die. And that generates a problem for us all to answer. Why me? Why now? Etc. So an “explanation”, whatever it may be, is sought after. We have no choice but to try to understand it. But I guess what you are saying is that we won’t be able to understand it in the usual sense of “understanding”. You are probably right about that.

  6. Kar Lee says:

    Eric Schwitzgebel asks in his talk “Is the United States conscious?” That is similar to the Chinese Nation thought experiment in which every Chinese in the nation is to simulate the action of one neuron, resulting in a “China Brain”. Is this China Brain conscious? I think both questions point to our misunderstanding of what consciousness is.

    Now, imagine yourself being a nation. How? Well, if you have a magic wand, and wherever you point you create a citizen of your artificial nation, you can create a nation by pointing your magic wand everywhere you can see, with you as its mind. You can create an army out of it. If you want to interact with a “naturally formed” nation such as the US (as opposed to the artificial one you just created), *nation to nation*, you will definitely find the US conscious. If you make friends with it, it will be “friendly” (not invading you, but trading with you, letting your citizens visit without a visa, etc.). If you try to attack it, it will defend itself and counter-attack. It seems like it can make up its mind most of the time. And this, if you are a nation, will make the US feel like a conscious entity to you. But what does it mean for you to be a nation? It means you can feel your troops going out to invade people, you can feel threatened by an unfriendly state, you can feel a certain need to develop a relationship with another nation, and you can feel your economic system in disarray (the equivalent of a person feeling digestive disorder or stomach pain), all with certain “qualia” to them. So, if you are a nation, you will feel the US being conscious by projecting your own equivalent qualia onto it. However, since none of us thinking about this problem is a nation (we are just individual thinkers), not even with the magic wand, and none of us has any qualia associated with being a nation, the concept of the US being conscious makes no sense to us. We cannot project our own feelings onto a nation, and we find the concept ridiculous.

    When we talk about consciousness, we are using a model of our own consciousness. If you are not using your own consciousness as the model when you talk about consciousness, I will insist that you don’t know what you are talking about – you don’t know what you mean.

    What all these point to, is when we are talking about something as being conscious, we are talking about ourselves. We are talking about imagining ourselves being that thing. Otherwise, consciousness cannot be defined.

    For folks who insist that consciousness has an evolutionary function: they are talking about the difference between themselves in a normal state and themselves in a sleep-walking state (if you are “conscious”, you can decide better, compared to when you are sleepwalking). But that is a wrong analogy. A sleep-walking person is a good metaphor for getting a feel for what a phenomenal zombie might be, but it is not the same thing. It cannot be used to mean there is any physical difference between a phenomenal zombie and a non-zombie. Physically, the two are the same. And that is the point.

    I don’t believe in the existence of phenomenal zombies because I can always project my own consciousness onto something that is reasonably similar to me. I can imagine myself being that thing. And that is my criterion and definition for consciousness. I am eager to hear other more useful definitions of consciousness.

  7. Arnold Trehub says:

    Kar: “I can always project my own consciousness onto something that is reasonably similar to me. I can imagine myself being that thing. And that is my criterion and definition for consciousness. I am eager to hear other more useful definitions of consciousness.”

    But what is it that makes your consciousness *similar* to another creature’s consciousness? Since the conscious content of others will always be different in some way from your own conscious content, you must be able to say in what fundamental way the conscious content of *any* conscious creature will be like your own. My claim is that any creature is conscious *if and only if* there is a brain representation of *something somewhere* in perspectival relation to a fixed spatio-temporal coordinate of origin which I call the *core self* (I!). This is *subjectivity*, and it implies a brain representation of the volumetric space surrounding a creature — the egocentric world. So my definition of consciousness is this:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*

    I think this is a very useful definition because it implicitly challenges us to find the kind of brain mechanism that can generate an internal representation of our surrounding world space and its content from a privileged egocentric perspective. My theoretical solution is the retinoid model of consciousness.

  8. Richard J R Miles says:

    Crazyism has reminded me that a few years ago I arrived at a crazy hypothesis by an aha/eureka! moment, not religious I hasten to add. However, trying to describe in two-dimensional words or a diagram what is a continuous variation of four dimensions with different effects and affects is my problem, let alone trying to prove or disprove it. I would be the first to admit I have written a rambling mess, called ‘My Philosophy of Psychology’, at http://www.perhapspeace.co.uk However this was how I originally managed to piece it together on paper. It describes the basis, which I have not found a way to significantly improve, even though I know a lot more now than I did then. I could remove some of the ramblings and write a more concise version, which might be easier to read, or maybe a video, but hey ho. The variations of consciousnesses are beyond the interest and comprehension of most people, who will not even want to go there for various reasons which I understand, including the religious. However, eventually it will make sense to someone, hopefully while I am alive. I have tried to refer to it before on this site but it only resulted in people bizarrely discussing the American Constitution.
    I will understand if you find it too tiresome or crazy, which I obviously do not… hmm… Maybe I should go and see someone.

  9. Vicente says:

    Arnold, that the brain has mechanisms to code (represent) some aspects of the environment could be a precondition for consciousness, but it is not sufficient. Even more, it could be a precondition for consciousness in conscious living beings on this planet, but not a general condition for consciousness. I anticipate your question, and no, I don’t know of any kind of consciousness that is not related to conscious living beings, but I can have a sort of intuition of what it could be…

    Keeping it easy (if there’s anything easy), representation is the first step; then you would need all the re-entry flows in the thalamo-cortical and cortico-cortical networks that provide the relational synchronised exchange that allows the representation to “make sense”.

    That’s the point: how does “the representation” make sense to the *core self*, for any purpose?

    So, don’t you think that your definition needs to be extended… unless you can accept perception without observation, or perception without experience, experience being the intellectual processing of the perception, to some extent, depending on the intellectual power of the observer?

  10. Arnold Trehub says:

    Vicente, as I see it, observation of an event consists in detecting, discriminating, recognizing, and classifying the event. Synaptic matrices in each of our sensory modalities perform all of our observations pre-consciously and project their outputs back into retinoid space as part of our stream of conscious experience. Making sense of, or assigning meaning to, an event is accomplished in our semantic networks — again, pre-consciously. For a diagrammatic representation of these processes see *The Cognitive Brain*, Ch. 16, Fig. 16.1, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter16.pdf

    Neuronal specifications for the cognitive mechanisms labeled in the Fig. 16.1 diagram are given in the earlier chapters of the book.

    Patterns of subvocal output driven by preconscious semantic operations can be projected into our egocentric retinoid space where they are experienced as inner speech telling us what we already know pre-consciously. For example, when you read these words on the computer screen you know what they mean *before* you consciously experience the phrases as inner speech that makes sense to you.

  11. Vicente says:

    Arnold,

    “For example, when you read these words on the computer screen you know what they mean *before* you consciously experience the phrases as inner speech that makes sense to you.”

    Your statement is interesting and supports the idea that consciousness is epiphenomenal, at least to some extent, but we are talking about consciousness, not blindsight, or unconscious behaviour.

    The key term is “you know”… I disagree, at least I don’t consciously know.

    An old inhibited memory of a past traumatic event could be interfering with my behaviour, but unless I consciously recall the event, I “don’t know” about it, even though the information about the event is somehow coded and used by my brain… We are talking about consciousness, not subconsciousness or unconsciousness…

    Think of your statement:

    – I “know what it means” before… but

    – the inner speech “makes sense” to me later…

    does not look coherent to me, even just from a semantic or logical point of view. And what about this “I” and “to me”…

    You need an additional variable in this equation.

  12. Arnold Trehub says:

    Vicente: “Your [Arnold] statement is interesting and supports the idea that consciousness is epiphenomenal, at least to some extent, but we are talking about consciousness, not blindsight, or unconscious behaviour … The key term is “you know”… I disagree, at least I don’t consciously know.”

    The point is that you don’t consciously know before you pre-consciously know. We are talking about *meaning* in the context of cognitive brain processes. If you look at plausible brain mechanisms, as I have, you will find that while the retinoid system gives us our personal phenomenal world, the parsing, sensing, and recognition of objects and events in our phenomenal world, as well as the production of imagistic associations and logical implications which give us the *meaning* of our experience, are constructed within our preconscious cognitive mechanisms (synaptic matrices). It is only *after* the axonal outputs of these preconscious cognitive mechanisms are projected into our subjective retinoid space that we are aware of them as inner speech and conscious imagery. When you type your response to these comments, the character strings that you decide to type will all be worked out in your brain as preconscious neuronal symbols/tokens and motor routines before you become aware of them as meaningful sentences. If you had to *consciously* think about how each word should be chosen and how all the words should be assembled to make a meaningful sentence before you talk or type, you would be cognitively impaired.

    However, consciousness is *not* epiphenomenal because consciousness properly binds all sensory events in a coherent egocentric spatio-temporal plenum to give us our total subjective experience — our phenomenal world to which we must adaptively respond in order to thrive in the real world.

  13. Vicente says:

    Arnold,

    “The point is that you don’t consciously know before you pre-consciously know. We are talking about *meaning* in the context of cognitive brain processes.”

    I am not denying that a lot of processing is required before mental events reach the conscious layer. I am saying that it is only then that you can appeal to the concept of “knowing”. To me, there is no *meaning* in mere neuronal processing before it becomes a conscious process. What happens then? How is the phenomenal theatre built?

    What you refer to is very much Baars’ GWS, with Dennett’s pandemonium of processes competing to reach consciousness.

    When I said epiphenomenal I was thinking of the “free will” problem. In your model, “*after* the axonal outputs of these preconscious cognitive mechanisms are projected into our subjective retinoid space”, no decision making agency is derived.

    Actually, your model could manage without the phenomenal side, and could fit the nature of Philosophical Zombies.

    I am inclined to think that Peter might be right in suggesting that we should look in other directions too, and that the piece that completes the puzzle could come from ontology rather than neuroscience. Neurology, in the way it progresses today, can reasonably do without phenomenology. After all, pleasure, emotions and the mind’s elation on listening to music are just the results of the activation of certain audio-cortex areas…

    So, neuroscience can always eventually identify mechanisms responsible for (correlating with) phenomenal states… but it tells NOTHING about what those phenomenal states ARE… hmmm… ontology then.

  14. Arnold Trehub says:

    Vicente: “To me, there is no *meaning* in mere neuronal processing before it becomes a conscious process.”

    If you look at all the evidence, I think you will find that meaning must be constructed *before* one is consciously aware of the meaning. Just as we make decisions *before* we are consciously aware of making a decision (~ 500 ms preconscious to conscious).

    V: ” What happens then ?? how is the phenomenal theatre built??”

    For an account of how the phenomenal theater of retinoid space is built see *The Cognitive Brain* (TCB), Ch. 7, “Analysis and Representation of Object relations”, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter7.pdf

    V: “What you refer to is very much Baars GWS with Dennett’s processes pandemonium competing to reach consciousness.”

    Baars’ GWS cannot explain consciousness because it cannot account for subjectivity, whereas the retinoid system does account for subjectivity. If you believe that Baars’ GWS is like the retinoid mechanism, then show us how GWS can resolve a Julesz random-dot stereogram as the retinoid mechanism does. See pp. 74–76, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter4.pdf

    Also, Dennett’s/Selfridge’s pandemonium mechanism cannot learn about a complex environment like my cognitive system does here:

    http://www.people.umass.edu/trehub/thecognitivebrain/chapter12.pdf

    V: “When I said epiphenomenal I was thinking of the “free will” problem.”

    Understanding the “free will” problem requires a thread of its own.

    V: “Actually, your model could manage without the phenomenal side, and could fit Philosophical Zombies nature.”

    Absolutely not! Without a representation of the phenomenal world in our retinoid space there would be no coherent biological substrate from which to extract relevant stimuli for learning about the constraints and affordances of the world we live in.

    V: “I am inclined to think that Peter might be right suggesting that we should look in other directions too, and the piece that completes the puzzle could come from ontology, rather than neuroscience.”

    I think that dual-aspect monism is the proper metaphysical framework in which to understand consciousness as a neuro-biological phenomenon. See “Understanding Consciousness” here:

    http://theassc.org/files/assc/Understanding_C.pdf

  15. Vicente says:

    Arnold,

    “Absolutely not! Without a representation of the phenomenal world in our retinoid space…”

    This is the point: the phenomenal is not represented or projected anywhere, it is the representation itself, with you in it!

    The phenomenal world is constructed (invented) by the brain, using sensorial data plus other subjective ingredients.

    Your model could perfectly well run without such a representation/creation. The neurological processes in the final retro-projection into the retinoid space need no phenomenal reality in order to produce the same behaviour.

    Regardless of the difficulty, you could write the physico-chemical equations of the brain (and boundary conditions) as a system, and you won’t have one single variable or parameter accounting for any phenomenal representation characteristic. However, the solutions of such a dynamical system should reproduce human behaviour… an android equipped with such computing machinery would look human.

    When you say “metaphysical” framework, what does “meta” mean to you?

    Arnold, in biological terms, and considering we have evolved under conditions in which a waste of (scarce) energy could be lethal… why are we discussing these issues?

  16. Arnold Trehub says:

    Vicente: “This is the point: the phenomenal is not represented or projected anywhere, it is the representation itself, with you in it!”

    You must be referring to a model of consciousness that is different than the retinoid model that I have proposed because I explicitly describe feed-forward feed-back loops between excitation patterns in the phenomenal world of retinoid space and the brain’s non-conscious cognitive mechanisms. For example, see “Space, self, and the theater of consciousness”; in particular, Figs. 7 and 8, here:

    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

  17. Vicente says:

    Arnold, now I see the source of miscommunication.

    “excitation patterns in the phenomenal world”

    To me, in the phenomenal world there are no excitation patterns by definition, there are only qualia.

    The feed forward/back loops would account then for the necessary reentry, and could be a solution for the binding problem.

    Then, all you can do is to try to identify NCCs, between those excitation patterns and the phenomenal states, like you did in your brilliant SMTT experiment.

    I don’t buy dual aspect monism, therefore I cannot equate phenomenal and neurological states.

    Having said this, nowadays I am attracted to some sort of pan-psychism combined with fleeting Buddhist aggregates and Clifford’s mind-stuff idea… the retinoid geometrical infrastructure fits in that frame. I have to admit that they have some right to appeal to crazyism… excuse me for this last subjective seizure.

  18. Kar Lee says:

    Arnold, I am enjoying your debate with Vicente. I admit that my definition of consciousness is very vague, bordering on useless. However, it is exactly what I mean when I consider something conscious. Your definition of consciousness at first glance looks clearer than mine, and potentially more useful. However, when I think about it more, it appears to be more of an assertion than a definition, and I hope the assertion coincides with the way you use the term consciousness.

    So, just for the sake of argument, let’s say, if it is somehow discovered that there is some conscious animal that does not have “a transparent brain representation of the world from a privileged egocentric perspective”, what would you say to such a discovery? (If I were you, I would probably track down the discoverer and ask him what he meant by “conscious animal”)

    Now, let’s take the definition you put forward and apply it to an iRobot Roomba vacuum cleaner. It has an internal map of the surroundings in which it operates. It “knows” where it has cleaned and where it has not. It has a “transparent” representation of the world from a “privileged” egocentric perspective (how else do you think it knows where the rest of the carpet is that it needs to clean?). Transparent or not, privileged or not, I don’t think we even know how to confirm or deny it. But it has most of the elements in your definition; would you say the vacuum cleaner is conscious or not? I bought one a few years ago, and it seemed to know what it was doing. And it did behave as if it had a purpose (agency). But is it conscious? I would claim it is, according to your definition. Any thoughts?

  19. Arnold Trehub says:

    Kar: “Now, let’s take the definition you [Arnold] put forward, and apply it on an iRobot’s Roomba vacuum cleaner. It has an internal map of the surrounding in which it operates. It “knows” where it has cleaned and where it has not. It has a “transparent” representation of the world from a “privileged” egocentric perspective (how else do you think it knows where the rest of the carpet is that it needs to clean?).”

    This is a common misunderstanding. The iRobot Roomba might have an internal look-up table for general commands and a set of motor commands linked to the closing of selected obstacle-contact switches, but it certainly does *not* have a global representation of its surround (its world) from its own perspective. Roomba doesn’t “know” that it is cleaning; it is only designed to move in certain patterns depending on which contact sensors are activated. That’s all that is needed to make it useful. It has no analog representation of the volumetric space it moves in. While it cleans the room, Roomba is as far from being a conscious thing as is a paramecium reflexively recoiling from a contact in its microscopic environment.

  20. 20. Kar Lee says:

    Arnold,
    Since I am not privileged to the proprietary information of iRobot, I cannot claim with 100% certainty that Roomba has an internal map of its surroundings. However, if I were to design Roomba, I would definitely design that in: a dynamically generated map of the area it is working in. Why not? And I bet it is very likely implemented this way. Even if it is not, it is very easily done. Vector representations of geometrical objects are common in CAD programs. A dynamically generated representation of the surroundings for a robot is a certainty.
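
    To illustrate, here is a purely hypothetical sketch of the kind of map I have in mind: a sparse occupancy grid that a robot could update as it moves. This is illustrative only, and has nothing to do with iRobot’s actual (proprietary) implementation.

```python
# Hypothetical sketch of a dynamically generated map: a sparse
# occupancy grid keyed by cells relative to the robot's start pose.
# Purely illustrative; not iRobot's actual (proprietary) code.

CLEANED, WALL = 1, -1

class MappingRobot:
    def __init__(self):
        self.x, self.y = 0, 0          # egocentric origin: the start cell
        self.grid = {(0, 0): CLEANED}  # the map grows as space is explored

    def move(self, dx, dy, blocked=False):
        """Advance one cell, or record an obstacle if the bumper fires."""
        target = (self.x + dx, self.y + dy)
        if blocked:
            self.grid[target] = WALL   # remember where it cannot go
        else:
            self.x, self.y = target
            self.grid[target] = CLEANED

    def cleaned(self, cell):
        return self.grid.get(cell) == CLEANED

robot = MappingRobot()
robot.move(1, 0)                 # clean one cell to the right
robot.move(1, 0, blocked=True)   # bump: remember the wall at (2, 0)
assert robot.cleaned((1, 0))     # it "knows" where it has cleaned
assert robot.grid[(2, 0)] == WALL
```

    Note that all coordinates are relative to the start pose, which is what I mean by the representation being egocentric.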

    You emphasized the “*analog* representation of the volumetric space it moves in.” I don’t see why you need to focus on the representation being analog. Digital to a high precision is just as good. After all, the world is ruled by quantum mechanics, and so it is fundamentally discrete anyway (even the space-time fabric at the Planck scale is most likely discrete). So I don’t see why one needs to insist that the representation be analog.

    So, again for the sake of argument, given the fact that you can design a robot with a dynamically generated internal digital representation of the surroundings it is in, why can’t it be conscious, according to your definition?

  21. 21. Vicente says:

    Arnold,

    Roomba doesn’t “know” that it is cleaning

    That’s it. What’s more, Roomba cannot manage concepts.

    What is there in the human brain circuitry that enables it to manage concepts?

    1) Cleaning ->

    2) Removing dirt -> [dirt definition needed]

    3) making environment nicer and preventing infections ->

    4) increased survival probabilities & avoiding disgusting odors (evolutionary programming)

    Are we Roomba cleaners programmed by evolution? Could be; cleaning is an advantageous behaviour for the species. Other animals show it.

    As long as we remain stuck at this practical level we won’t make any progress. Consciousness allows us to pack a complex practical behaviour into an abstract general class concept such as “cleaning”, which integrates simpler concepts such as dirt, removing, derived benefits, etc.

    Arnold, it doesn’t matter if you have a privileged representation of your surrounding world, if you don’t know that world exists…

    Consciousness has to do with knowing that that representation exists; otherwise you are a Roomba machine (with a better chip).

    What makes the difference between the Roomba processor and a brain’s neural networks when cleaning? Maybe, if the human cleaner is deeply concentrated on the task, none (except for the phenomenal representation).

    So the only differences between the human cleaner and Roomba are qualia and the knowledge about the action, at least at some points in time. Eventually we hit upon qualia and meaning, as always.

    Again, here we are with these slippery concepts in our hands: “to know”, “meaning”, “intentionality/aboutness”…
    ________________________________________________________________

    PS: One of the very few ways to “salvation” is to make conscious many of our instinctive programmed behaviours and reactions. Or do you all want to be Roomba cleaners? (Kar Lee’s practical examples are as handy as always.)

  22. 22. Arnold Trehub says:

    Kar: “Vector representations of geometrical objects are common in CAD programs. A dynamically generated representation of the surroundings for a robot is a certainty….. I don’t see why you need to focus on the representation being analog. Digital to a high precision is just as good. After all, the world is ruled by quantum mechanics, and so it is fundamentally discrete anyway (even the space-time fabric at the Planck scale is most likely discrete). So, I don’t [see] why one needs to insist that the representation be analog.”

    Vector descriptions of geometrical space and geometrical objects (e.g., CAD) are simply abstract mathematical structures that, *as such*, do not represent anything about the physical world. They must be *interpreted* in a digital computer as commands to construct analog representations of the objects they formally describe. It is only the representation on the computer screen or on a drawing board that embodies spatial features analogous to those in the physical world. Moreover, the computer’s screen image is not a volumetric space, and it has no fixed perspectival (egocentric) origin.

    Recall my definition of consciousness:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*

    So:

    1. A CAD/digital-generated screen representation has no *egocentric* perspective.

    2. The brain is not a digital computer.

    3. The operative cognitive mechanisms of the brain do not function at the scale of quantum mechanics.

    K: “So, again for the sake of argument, given the fact that you can design a robot with a dynamically generated internal digital representation of the surrounding that it is in, why can’t it be conscious, according to your definition?”

    A robot that operates according to CAD/digital instructions does *not* have a representation of its volumetric surround. It only has a digital description of a selected physical space/object. As far as I have been able to determine — and I’ve asked many knowledgeable people — there is no known artifact that contains a representation of the volumetric space in which it exists from a fixed perspectival origin.

  23. 23. Arnold Trehub says:

    Vicente: “What is there in the human brain circuitry that enables it to manage concepts?”

    The neuronal structure and dynamics of the semantic networks that I have proposed can manage concepts. See *The Cognitive Brain*, Ch. 6, “Building a Semantic Network”, and Ch. 13, “Narrative Comprehension and Other Aspects of the Semantic Network”, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter6.pdf

    and here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter13.pdf

    V: “Arnold, it doesn’t matter if you have a priviledged representation of your surrounding world, if you don’t know that world exists.”

    Our immediate conscious experience of the world *must* precede any concept/knowledge *about* the world; e.g., having any concept about the *existence* of the world. So it certainly matters that you have a representation of your surrounding world.

    V: “So the only differences between the human cleaner and Roomba, are qualia and the knowledge about the action, at least at some points in time. Eventually we hit onto qualia and meaning, as always.”

    Yes, but I have proposed that qualia are just the phenomenal features of consciousness — selected patterns of autaptic-cell activation in retinoid space. As far as our *knowledge* goes, at any given time, there is a vast amount of knowledge stored in our brain that we are not conscious of, but which governs our understanding and action from moment to moment. Salient parts of this preconscious store of knowledge are neuronally projected into our subjective retinoid space where they contribute to the stream of conscious content as inner speech and imagery. We are nothing like Roomba!

  24. 24. Kar Lee says:

    Arnold,
    1. “A CAD/digital-generated screen representation has no *egocentric* perspective.”
    But a robot that is cleaning the carpet has one. Otherwise, how does it know where it is in the room and where else it still needs to clean? It must be keeping track of the locations that it has cleaned. Otherwise it will be going back to the same place to clean over and over again. It has to have an egocentric perspective.

    I happen to agree with a lot of what you are saying, including that the brain is not a digital computer, but that does not reconcile well with your definition of consciousness, and that’s what I wanted to point out.

    I don’t follow your argument regarding whether vector representations of spatial objects are representations or not. You seem to be saying that they need interpretation. But from a computational point of view, that is just a transformation operation. Whether the representation is stored in ASCII, floating point, bitmap, or any other format, as long as the robot can interpret the information and make use of it in navigating its immediate surroundings, I don’t see why it is not an egocentric representation.

    You said,”A robot that operates according to CAD/digital instructions does *not* have a representation of its volumetric surround. It only has a digital description of a selected physical space/object.”

    But a digital description is a representation. Volumetric or not is really irrelevant, because the floor is 2D and so the relevant environment is 2D. I am sure a robot can be built to accommodate 3D data, just not Roomba. In fact, the dynamically generated map can be extended: it can keep expanding until the physical memory space runs out. You would probably exhaust your own memory if you were put inside a complicated maze too, wouldn’t you? The robot could even have a built-in mechanism to selectively “forget” things so as to make space for more information. So I don’t see why this robot cannot have a representation of its *volumetric surround*. So, why isn’t it conscious?

  25. 25. Vicente says:

    I agree with Kar Lee: at the end of the day, whether there is 3D-to-3D mapping coding or not should not be crucial for consciousness… It could be for vision.

    I was reading about the Karolinska Institute experiments, in which, using the right illusory apparatus, they make your brain believe that it is transported into another “host”, for example a Barbie doll, and I thought that the projection on the Z-planes could explain this effect. It is basically a distortion of the natural I! locus and of the predefined scale used to measure surrounding space.

    And then we should address Peter’s concern about handling concepts not involving space (e.g., numbers), in which case the retinoid space is not necessary…

  26. 26. Arnold Trehub says:

    Vicente and Kar,

    As I see it, a digital coding of any space is not a transparent *representation* of its surrounding world from an egocentric perspective because a digital code does not exhibit the perspectival spatiotopic features of its physical surround. So a digital device does not meet my definition of consciousness.

    You, of course, are free to offer a different definition of consciousness, one in which a digital code is arbitrarily accepted as a transparent representation of an egocentric surround. Under this new definition it would be a straightforward matter to design a robot that you would say is conscious. But suppose you did it this way. Since human brains do not function as digital computers, how would you relate robot consciousness to human consciousness? Would they really have anything in common with us? If not, how comfortable would you be with the claim that consciousness is explained by a digital code? Would it be like something that you experience? I can’t imagine how it would be like anything that I experience.

    Kar: “But a robot that is cleaning the carpet has [an egocentric perspective]. Otherwise, how does it know where it is in the room and where else it still needs to clean? It must be keeping track of the locations that it has cleaned. Otherwise it will be going back to the same place to clean over and over again. It has to have an egocentric perspective.”

    The robot does *not* have to know where it is in the room. All it has to do is follow a simple digitally coded routine. For example, in a rectangular room:

    1. The owner starts the vacuum robot (Roomba?) at one corner of the room (say lower right corner).

    Roomba vacuuming code:

    2. Move straight ahead until front contact switch is closed.
    3. Move to the left by the distance of the vacuum nozzle.
    4. Reverse direction of travel.
    5. Move straight ahead until front contact switch is closed.
    6. Move to the left by the distance of the vacuum nozzle.
    7. Reverse direction of travel.
    8. Repeat steps 2 – 7 until a side contact switch is closed.

    Floor vacuuming completed and Roomba had no information about where it was in the room while doing its job.
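
    The steps above can be written out as code. A toy sketch (the grid-world simulation and all sensor/motor callables here are hypothetical stand-ins, not real Roomba firmware):

```python
# Sketch of steps 2-8 above: a purely reactive vacuuming routine.
# The robot stores no map and no coordinates; it only reacts to its
# contact switches. All callables are hypothetical stand-ins.

def vacuum(front_blocked, side_blocked, forward, shift_left, reverse):
    """Sweep a rectangular room using contact switches alone."""
    while not side_blocked():       # step 8: a side switch closes
        while not front_blocked():  # steps 2 and 5
            forward()
        shift_left()                # steps 3 and 6: one nozzle width
        reverse()                   # steps 4 and 7

def simulate(cols, rows):
    """Toy grid room; returns the set of cells the robot swept."""
    state = {"col": 0, "row": 0, "dir": 1, "swept": {(0, 0)}}
    def front_blocked():
        return not (0 <= state["row"] + state["dir"] < rows)
    def side_blocked():
        return state["col"] >= cols - 1
    def forward():
        state["row"] += state["dir"]
        state["swept"].add((state["col"], state["row"]))
    def shift_left():
        state["col"] += 1
        state["swept"].add((state["col"], state["row"]))
    def reverse():
        state["dir"] *= -1
    vacuum(front_blocked, side_blocked, forward, shift_left, reverse)
    return state["swept"]

swept = simulate(3, 4)
assert all((c, r) in swept for c in (0, 1) for r in range(4))
assert (2, 1) not in swept  # last column entered but not swept
```

    Note that the `vacuum` routine itself has no notion of position at all: only the toy simulation, standing in for the physical room, keeps track of where the robot is. (As written, the routine enters the last column without sweeping it.)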

    Vicente: “I was reading about the Karolinska Institute experiments, in which using the right illusory apparatus, they make your brain believe that it is transported to another “host”, for example a barbie doll, and I thought that the proyection on the Z-planes could explain this effect.”

    You are right, Vicente. See this paper in press in JCS:

    http://theassc.org/files/assc/Where%20Am%20I.pdf

    V: “And then we should solve Peter’s concern about handling concepts not involving space (eg: numbers?) in which case the retinoid space is not necessary..”

    Number concepts can be handled in the preconscious semantic networks. However, if we are consciously aware of handling numbers, then we do need number-related inner speech and imagery projected into our retinoid space.

  27. 27. Arnold Trehub says:

    Correction to my comment #26: Step 8:

    8. Repeat steps 2 – 7 until a side contact switch *and front contact switch* are closed.

  28. 28. Kar Lee says:

    Arnold, no need to correct your algorithm. We know what you mean. :)
    But don’t forget, Roomba knows where the docking station is. It can go back to recharge.

    Let’s dig deeper into your statement: “a digital coding of any space is not a transparent *representation* of its surrounding world from an egocentric perspective because a digital code does not exhibit the perspectival spatiotopic features of its physical surround.”

    I don’t think I understand the logic at all: “because a digital code does not exhibit the perspectival spatiotopic features of its physical surround”. Can you explain a little more? If a digital code does not, I don’t think any analog code does either. I maintain that any analog code can be represented by a digital code to any arbitrary precision. Any information on an analog record can be reproduced by a digital CD, to any arbitrary precision.
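
    To make the “arbitrary precision” point concrete, here is a toy sketch (the signal and bit depths are arbitrary illustrative choices): the quantization error of a digital code shrinks geometrically as bits are added.

```python
# Toy illustration: an "analog" value can be captured digitally to
# any desired precision by adding bits. Values are illustrative only.

import math

def quantize(x, bits):
    """Round x (assumed in [-1, 1]) to the nearest code of given bit depth."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

analog_value = math.sin(1.0)  # stand-in for an analog quantity

prev_err = float("inf")
for bits in (4, 8, 16):
    err = abs(analog_value - quantize(analog_value, bits))
    assert err < prev_err      # each extra bit tightens the error bound
    prev_err = err

# With 16 bits the error is already below one part in 2**16 of full scale.
assert abs(analog_value - quantize(analog_value, 16)) <= 2 ** -16
```

    Whether such a code can do the *causal* work of the analog mechanism is, of course, exactly what is in dispute here; the sketch only shows that precision itself is not the obstacle.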

  29. 29. Arnold Trehub says:

    Kar: “I maintain that any analog code can be represented by a digital code to any arbitrary precision. Any information on an analog record can be reproduced by a digital CD, to any arbitrary precision.”

    I think this is where we are talking past each other. The retinoid system is not an analog code. It is a *neuronal mechanism* in a brain. The fact that its function can be *simulated* in some way by a digital code or described by information on an analog record does not make such codes or records competent to perform what the neuronal mechanism is able to perform. A digital simulation of a storm is not a storm. My definition of consciousness explicitly speaks of a “transparent brain representation of the world”.

    K: “But don’t forget, Rooomba knows where the docking station is. It can go back to recharge.”

    In my past life I designed electromechanical devices much more sophisticated than Roomba. It is a relatively simple matter to design a robot that can home in on its recharging station. Such a robot is not conscious according to my definition of consciousness.

  30. 30. Vicente says:

    Arnold

    “The retinoid system is not an analog code. It is a *neuronal mechanism* in a brain”

    This is what I find so difficult to understand. There must be some kind of representation code. Even in a drawing or printed picture there is a code: concentrations of pigment, perhaps?

    When chameleons or some insects camouflage themselves in the environment, they do what you say: there are physiological processes in their skins that represent salient features of the surrounding world. Green leaves induce green skin (I said green, but I could have said optical reflective properties for certain wavelengths).

    I believe you are saying that the brain does almost the same, which I don’t see. The brain has a representation of the surrounding world using some kind of coding (not digital, of course). Activation patterns are completely separate from the world, except for some geometrical resemblance in very simple cases. See, for example, the paper referred to in the next post about hearing: they could track speech elements (such as syllable coding) a long way into the processing chain, but the excitation pattern for a syllable is just that, firing rates, which only to some extent and in some cases look like the input air-pressure waves. Likewise, retinal excitation patterns look like the image perceived, and those characteristics are preserved to some extent into the V1 and V2 areas.

    I agree with you that a simulation of the retinoid system will not reproduce the output.

    The issue remains open to me. I don’t know what a phenomenal image is and how it is produced.

    I, of course, acknowledge all the underlying brain activity. But how are the two sides, phenomenal and neuronal, bound together? This is the question.

  31. 31. Kar Lee says:

    Arnold,
    I have one more point to make, in addition to what Vicente has commented about the retinoid system not being an analog code.

    You said, “A digital simulation of a storm is not a storm.” You are right about that, but a simulation of consciousness *is* consciousness (provided that it can indeed be simulated). Here is the logical argument:

    Let me simulate the analog mechanism in your brain to any arbitrary precision with a digital computer. Every time a neuron fires in your brain, a corresponding process happens in my simulation program. So there is a machine out there in the world that has outputs identical to your brain’s, given the same inputs. Now I am going to feed those output signals into a biological body which I will call Arnold’. So, behaviorally, Arnold’ and Arnold are identical. Now, as an outsider, not being Arnold himself as you are, I will have no way of knowing which one is Arnold and which one is Arnold’. For all I can tell, and for all anyone can tell, both are conscious (unless you insist one is a zombie).

    The caveat is that all the relevant processes can be simulated, which is the assumption in this argument. This assumption is consistent with what you have claimed so far. So I will have to conclude that, as far as consciousness goes, the simulation is the real thing. What objections can you find to this argument, other than obvious ones like “you can’t practically feed signals into another identical biological body minus the brain”?

  32. 32. Arnold Trehub says:

    Vicente:

    “The brain has a representation of the surrounding world using some kind of coding (not digital, of course).”

    Your direct conscious experience, as such, is *not* expressed as a code in your brain. It is simply the phenomenal world that you experience in your extended present. My claim is that the spatio-temporal stream of autaptic-cell activation patterns in retinoid space *IS* one’s phenomenal world/consciousness. If conscious content were only a code then it would have to be *decoded* to realize our conscious content, but then this content would have to be decoded, and so forth, leading to an infinite regress.

    V: “The issue remains open to me. I don’t know what a phenomenal image is and how it is produced.”

    My theoretical claim is that phenomenal images are autaptic-cell activation patterns in egocentric retinoid space, and that they are produced by the neuronal mechanisms that I have specified in my theoretical model of the retinoid system and the other brain mechanisms that I have detailed in *The Cognitive Brain* (MIT Press 1991).

    V: “How are both sides, phenomenal and neuronal, binded? This is the question.”

    This is why I have proposed my bridging principle for the scientific exploration of consciousness:

    *For any instance of conscious content there is a corresponding analog in the biophysical state of the brain*

    So the scientific problem is to find the brain mechanisms that can generate proper analogs of conscious content. My SMTT experiment shows that the neuronal structure and dynamics of the retinoid system can generate proper analogs of conscious content.

  33. 33. Arnold Trehub says:

    Kar: “Let me simulate the analog mechanism in your brain to any arbitrary precision with a digital computer. Every time a neuron fires in your brain, a corresponding process happens in my simulation program. So, there is a machine out there in the world that has identical outputs as your brain, given the same inputs.”

    This is not true unless your simulation program is run in a living neuronal machine identical to my brain; e.g., parallel short-term memory, local field potentials, global spatiotopic structure, ionic fluxes, etc.

  34. 34. mark drago says:

    I wonder though: are the “physical” and “mental” indeed “two separate realms”?

  35. 35. Kar Lee says:

    Arnold,
    So, you are claiming that there are brain processes that cannot be simulated in a digital computer, not on practical grounds but as a matter of principle?

  36. 36. Vicente says:

    Arnold and Kar, in this case the problem is a bit fuzzier than in ordinary simulation techniques.

    Usually, to assess the quality of a simulation you have two parameters: fidelity and accuracy.

    In summary, what you have to do is compare real and simulated input/output series (or cases) and see how they match.

    In our system, the input and boundary conditions could be defined, I suppose: a set of initial parameters for the brain state, and a predefined stimulus.

    But what would be the output? Observed behaviour? On a statistical basis? Due to brain signalling noise (chemical, thermal, quantum), you would have to average over many simulation runs.

    What about the phenomenal output? There is no way to check it… would behaviour be enough?

    Regardless of the practical difficulties, I am not so sure how simulation can be used for brain research at the system level… From a physical/functional point of view the brain seems to be a strongly chaotic system, very far from equilibrium (not in mean global homeostatic terms, of course). I don’t believe a simulation is going to shed much light on the problem.

    Mark, what do you feel about it?

  37. 37. Kar Lee says:

    Vicente,
    I think you have made some good points. A chaotic system is probably quite difficult to simulate. However, I am hopeful that the outputs would still be useful and realistic. Take, for example, weather prediction. After a week, the simulation result starts to diverge from the real system. However, if we let the simulation continue, the result, though not exactly the same as reality, is still realistic, for:

    1) The real weather could have evolved that way too, had it not been for some small perturbation (the signature of a chaotic system) that caused the two to diverge from each other.
    2) No one can look at the simulation result and tell that it is not a real system but “merely” a simulation.

    I am thinking along the same lines in terms of simulating a brain. Currently we can simulate chemical reactions (Monte Carlo, molecular dynamics, etc.), and these methods are being used in drug development. So, to my mind, all processes in the brain, as long as they are electrochemical in nature, can be simulated. The outputs are the signals that go out to the muscles (eyeballs, mouth, fists, legs…) that control the behavior of an individual (I am taking a strictly behaviorist approach here). Aside from the fact that computers may not have the speed to simulate all this in real time, nothing is ill-defined here. We can just assume that the computer is fast enough for the sake of argument.

    So, given an Arnold and a simulated Arnold’, can anyone tell that one is conscious and the other is not? I would not think so, because we can only judge each other by behavior. So, why is being a neuronal machine that important?

    Arnold, do you see my point?

  38. 38. Vicente says:

    Kar Lee, it is a bit more complicated. Chaotic and noisy systems can be simulated, as long as you can write down the dynamical equations. What I meant is that even two real copies of the same person could show different behaviours under the same initial conditions (internal and external), because of the effect of noise on the evolution of the chaotic system. That is why a statistical approach is required: Arnold and Arnold’ should show equivalent behaviours on average, and I don’t know if that really makes sense.

    Then, whether you compare discrete behavioural events (reactions) or continuous behaviour during a defined time period could also make a big difference in the final conclusions. Even the duration of the time period could be a critical parameter…

  39. 39. Kar Lee says:

    Vicente,
    I agree. Even Arnold himself could decide differently if he were to repeat a decision, because of noise (choosing vanilla ice cream over frozen yogurt, for example). So we can relax the condition a little.

    People who are familiar with Arnold will not confuse him with someone else in the same department. If Arnold’ can get everyone who is familiar with Arnold to think that he is Arnold, the simulation program works. That is what is meant by equivalent behaviors on average. When that happens, Arnold’ should be considered conscious, despite the fact that it is not a neuronal machine. Make sense?

  40. 40. Arnold Trehub says:

    Kar: “So, why is being a neuronal machine that important?
    Arnold, do you see my point?”

    I see your point, but the only empirical evidence that we have to support my definition of consciousness is obtained from living persons (neuronal machines). So you would have to show that a *digital/propositional simulation* of the essential neuronal machine in a person is the same as the real neuronal machinery in the person. I don’t see how this is possible.

  41. 41. Vicente says:

    Ah, Kar Lee, appealing to the Turing argument, hmmm. So the simulation works if it passes an ad hoc Turing test. OK, could be, but it would have to be a really exhaustive one. And still, I am not so sure about the conscious component.

    I doubt that this will ever be implemented, no matter how powerful quantum computers become in the long term. Look at Blue Brain; it seems a good candidate to become the flop of the millennium… who knows.

  42. 42. Kar Lee says:

    Arnold,
    “So you would have to show that a *digital/propositional simulation* of the essential neuronal machine in a person is the same as the real neuronal machinery in the person. I don’t see how this is possible.”

    I think it comes down to this: everything that can be quantified and governed by rules can be simulated. If chemical reactions can be simulated (check out a physics simulation tool called Comsol; they have a chemical reaction package), brain processes can be simulated. Taking a physicalist’s stand, the burden of proving otherwise seems to fall on your side. :)

  43. 43. Kar Lee says:

    Vicente,
    It is not really the Turing argument. It is only one way to describe what we mean by “equivalent behaviors on average”. We would have the same argument even if we were comparing two *identical* neuronal brains. One brain may make one choice and the other a different choice, as you pointed out, just because of noise and because the brains operate at the edge of chaos. Despite the occasional differences, the two brains will produce what everybody else will consider “the same person”. And the “everybody else test” is not really a Turing test in the strict sense.

  44. 44. Arnold Trehub says:

    Kar, a simulation tells you what might be expected to happen in the physical world. It does not create the expected happening in the physical world. That’s why the predictive accuracy of simulation models is tested against real events. Simulations can have practical scientific value, but it seems to me that the power you attribute to a simulation suggests an invocation of magic.

  45. 45. Vicente says:

    Arnold, it is true that the simulation does not create the expected happening in the physical world, but it provides the “value” for the expected happening in the physical world. Actually, what you compare are two values: the expected value provided by the simulation and the measured value of the happening. In our case, the problem is how to define those values. Probably Kar Lee is right and a subjective method is required. Well, after all, subjectivity is a main feature of consciousness.

    Kar Lee, yes, you are right; it was just the similarity between the two concepts, based on the subjective perception of an individual.

    Anyway, to me, even if the simulations were pretty good, I would have no guarantee that they would be conscious, for the same reason that an oven simulation can’t cook biscuits. On that point I am close to Arnold, and I believe that life, and the real biophysical processes in the brain, play a crucial role in a way not yet understood.

    Nevertheless, it could be that consciousness is intimately related to certain information-management processes, and any device that manages information in a neuronal fashion becomes conscious; but that would not be simulation, it would be bionics.

    When you look at a brain on a dissection bench, and you know a bit about its histology and physiology, consciousness and qualia are really difficult to fit into it on purely physical grounds. Probably that’s why crazyism is part of the game: when you look at that greyish chunk, you end up a dualist or a dual-aspect monist, or simply astonished forever.

  46. 46. Kar Lee says:

    Vicente,
    I am actually more hopeful that we can objectively determine the degree of similarity between two individuals and determine whether they are the same person. Remember when Joe Klein (famous in the US; not sure if you know about him in Europe) published the novel “Primary Colors” anonymously? A frenzy to identify the author followed. Finally, just on the basis of writing style and word use, Professor Donald Foster, who has done a lot of literary analysis, identified Joe Klein as the author. Somehow, I think many traits of a human being can be quantified. If we can do that, we can make the process a lot less subjective.

    Arnold,
    At this point, I am quite sure you are a believer in John Searle’s Chinese Room argument. Guess what? Me too. What I have been trying to do is to use your logic and your own definition to see where they lead us. In the case of consciousness, the simulation is the real thing, because you do have real output signals that you can use, exactly like the output signals of a neuronal brain that can be fed to the rest of the body. You can look at the neuronal brain as the ultimate simulator. There is nothing a brain can do that a simulator cannot do, if the brain can indeed be simulated. Do you have a better argument for why some brain process cannot be simulated?

  47. 47. Vicente says:

    Ah, “writing style and use of words”: now you come back to my statistical analysis. Very bad; you are beginning to show a pendular and erratic rationale that is difficult to follow… just kidding. Actually, subjective assessment is probably, tautologically, based on subjective statistics… Still, the presence of consciousness remains an issue.

    Do you have a better argument for why some brain processes cannot be simulated?

    Jumping in ahead of Arnold (sorry): because it is not susceptible to being mathematically modeled. That would be a good use for a simulation: if you create a really good mathematical model of the brain and it significantly fails, it might indicate that there is something beyond physics in the brain’s case.

  48. Arnold Trehub says:

    Vicente,

    Taking account of the weight of evidence I think dual-aspect monism makes the most sense.

    Kar,

    Yes, I agree with Searle’s Chinese Room argument.

    Kar: “In consciousness, the simulation is the real thing because you do have real output signals that you can use, exactly like the output signals of a neuronal brain that can be fed to the rest of the body.”

    The point is that consciousness is a transparent representation of the world from an egocentric perspective (subjectivity). The fact that this subjective representation can serve as a source of signals that can be fed to the rest of the body is critical for adaptation/survival, but it is not essential for the existence of consciousness. Consider locked-in patients; e.g., Bauby “The Diving Bell and the Butterfly”.

    K: “Nothing that a brain can do, that a simulator cannot do, if the brain can indeed be simulated. Do you have better argument on why some brain process cannot be simulated?”

    I don’t claim that brain processes cannot be simulated. What I claim is that a digital/propositional simulation would lack the essential property of having the intrinsic perspectival origin (I!) needed for subjectivity. On the question of an analog artifact having a global perspectival representation of the space in which it exists — the structural and dynamic properties of retinoid space — all I can say is that I know of no such artifact. If such an artifact were to be built, one might reasonably argue that it does exhibit subjectivity/consciousness.

  49. Vicente says:

    Arnold, the weight of that evidence is very light: featherweight on the phenomenal side of the dual nature, definitely heavier on the neurological side.

    The point is that to say “dual monism” is either contradictory or meaningless… it shows our ignorance about a fundamental element of reality.

    Take for example the case of matter and radiation, and the standard model of particles. Some particles are responsible for matter structure, and others for interactions, forces, gauge fields… Before radiation was discovered, all kinds of theories were proposed to explain the observed effects. In a way, to appeal to a dual-aspect monism at that time would have made sense, unexplained though it was. Today, we could say, in a way, that there is dual-aspect monism in physics… “matter and radiation” (dual), but both according to the same body of laws, and with a solid theory that describes the matter-radiation interaction processes (monism).

    Until somebody provides a similar rationale for dual-aspect monism (constituent elements and interactions) in the case of consciousness, I consider dual-aspect monism bullshit: no evidence for anything.

  50. Kar Lee says:

    Arnold,
    You said, “I don’t claim that brain processes cannot be simulated. What I claim is that a digital/propositional simulation would lack the essential property of having the intrinsic perspectival origin (I!) needed for subjectivity.”

    If brain processes can be simulated, then the simulation will necessarily contain the “intrinsic perspectival origin”; otherwise, it won’t be a good simulation. So your claim contradicts the assumption that brain processes can be simulated.

    Vicente,
    If we were to use the mathematical model of an artificial neural network, we may not get the full picture. But let’s take Arnold’s assumption that there is no essential quantum effect in the brain (even quantum effects can be simulated!): the brain is an electrochemical system. Conceptually, such a system is a system of molecules moving around, interacting through electrochemical forces. When I was a young graduate student decades ago, I did Molecular Dynamics simulations for a little while. If you are familiar with MD, you probably know that it can be done at the atomic level, the molecular level, etc. by specifying two-body forces, three-body forces, etc., and these interactions don’t even need to have an analytic form – they can just be lookup tables, as a function of distance and relative orientation. In my spare time I was able to use some weird types of interaction to create a long chain of atoms (a molecule), or a cluster of stars from an originally uniform distribution, and other very entertaining configurations. I imagine, along the same lines, we can simulate individual neurons, synapses, ion channels, etc. at the molecular level and build up a whole brain. It would be an impossibly huge simulation, definitely beyond the capacity of today’s computing power. But as long as we have the right picture – the brain is indeed an electrochemical system – in principle the brain can be simulated at the molecular level and nothing will be lost in the simulation. That should include the “intrinsic perspectival origin”.

    But if you can build up a brain from molecules, what is this “intrinsic perspectival origin”? That is something worth discussing.
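    For what it’s worth, the kind of MD loop Kar describes, with pairwise forces taken from a lookup table instead of an analytic potential, can be sketched in a few lines. This is only a toy illustration, not any real MD package; the explicit-Euler update and the sign convention (positive table values meaning attraction) are my own assumptions:

    ```python
    import numpy as np

    def tabulated_force(r, r_table, f_table):
        # Force magnitude from a lookup table; no analytic form needed,
        # exactly as Kar describes for MD interactions.
        return np.interp(r, r_table, f_table)

    def md_step(pos, vel, mass, dt, r_table, f_table):
        # One explicit-Euler step for n point particles with pairwise
        # central forces. pos, vel are (n, d) arrays; positive table
        # values pull particle i toward particle j.
        n = len(pos)
        forces = np.zeros_like(pos)
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[j] - pos[i]
                r = np.linalg.norm(d)
                f = tabulated_force(r, r_table, f_table) * d / r
                forces[i] += f
                forces[j] -= f  # Newton's third law
        vel = vel + forces / mass * dt
        pos = pos + vel * dt
        return pos, vel

    # Two particles with a constant attractive force of 1 drift together.
    pos = np.array([[0.0, 0.0], [2.0, 0.0]])
    vel = np.zeros((2, 2))
    r_table = np.array([0.0, 10.0])
    f_table = np.array([1.0, 1.0])
    pos, vel = md_step(pos, vel, 1.0, 0.1, r_table, f_table)
    ```

    A production code would use a symplectic integrator (velocity Verlet) and neighbour lists, but the conceptual point stands: the whole dynamics is driven by tabulated interactions.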

  51. Vicente says:

    Kar Lee,

    I am not really sure what you mean by “intrinsic perspectival origin”… is it the “self” feeling?

    But if you can build up a brain from molecules, what is this “intrinsic perspectival origin”?

    Well, that is precisely what nature does using embryological and developmental processes, for each new individual with a brain.

    It is a pity that we don’t recall our first months of life, those memories would provide really valuable information about consciousness and the brain.

    Look at the top of the page, if the self is an illusion, who is fooled?

    Anyway, note that a brain simulation has to consider two equally important layers: the physical layer (neuron simulation) and the logical architecture level (synaptic networks). Probably the perspectival origin is related to the architectural layer, like the retinoid system, rather than to the biomolecular one.

    You might have modelled the membranes, channels, exo-vacuoles for neurotransmitters, etc. pretty well… but if the networks don’t work, no signals will propagate and the brain won’t start, like somebody suffering a severe cognitive impairment.

    Both layers are coupled but independent.

    It would be fun to have one of those simulations to play with, huh?

  52. Arnold Trehub says:

    Vivente: “The point is that to say “dual monism” is contradictory or meaningless… it shows ignorance about a fundamental element of reality.”

    I agree that “dual monism” is incoherent. But “*dual-aspect* monism” is *not* incoherent, and I think it is the most sensible metaphysical framework for understanding consciousness.

    Consider what is happening in the SMTT experiment. The subject’s retinoid system generates a conscious image of a triangle (say CT) when there is no retinal image of a triangle. This experience has to be a 1st-person perspective/aspect from *within the brain* of the person having the conscious experience because nobody else in the world can experience and measure this CT. At the same time, the structural and dynamic properties of the retinoid system which, according to theory, generate the subject’s CT are experienced from the 3rd-person perspective, *outside of the brain of the subject*. In the SMTT experiment, two related measurements are made from these two different perspectives:

    1. Subjective perspective (1pp):

    On a category scale: “I see a triangle”

    Objective perspective (3pp):

    On a category scale: The retinoid model generates the shape of a triangle in retinoid space under the relevant SMTT conditions.

    Conclusion: The subjective (1pp) response matches the objective (3pp) prediction.

    2. Subjective perspective (1pp):

    On an ordinal scale: “The base of the triangle gets smaller when I turn the knob clockwise, and larger when I turn the knob counterclockwise.”

    Objective perspective (3pp):

    On an ordinal scale: The retinoid model reduces the length of the base of the triangular shape that is generated in retinoid space if the knob is turned clockwise in the SMTT experiment, because this increases the frequency of horizontal oscillation of the occluded triangle. The retinoid model increases the length of the base of the triangle in retinoid space if the knob is turned counterclockwise, because this decreases the frequency of oscillation.

    Conclusion:

    The subjective (1pp) responses match the objective (3pp) predictions.

    Since this depends on two different perspectives/aspects — 1pp and 3pp — on the same underlying physical mechanism (retinoid space), I think it is good evidence for the applicability of dual-aspect monism to the study of consciousness.

  53. Arnold Trehub says:

    Re #52: Sorry. Typo. “Vivente” = “Vicente”. :-)

  54. Vicente says:

    Arnold, but all you have are correlates for simple cases, and the nature of the 1pp experience (what it is “made of”) remains a mystery, at least to me. Yes, it is true that some optical illusions can be predicted on the basis of the neurophysiology of vision, and we can also predict that certain drugs will produce certain effects on your mood, as we know that the concentration levels of certain molecules (dopamine, serotonin…) have an effect on your optimism about life… as we know that the brain is deeply involved in conscious experience, fine.

    The question is: what is consciousness?

    The answer, that consciousness is the 1pp of the biological processes that take place in the retinoid system, is to me equivalent to saying that the reason planets orbit around stars is that they move around them.

    And… I am being really conservative… accepting a full symmetry between phenomenal processes and their neuronal correlates, i.e. that for each phenomenal event there is always a corresponding neuronal process to support it. Nobody has proven such a thing.

  55. Arnold Trehub says:

    Vicente, how can we prove a scientific theory?

  56. Kar Lee says:

    Vicente,
    “I am not really sure what you mean by “intrinsic perspectival origin”… is it the “self” feeling?”
    I copied it from Arnold. I have a vague sense of what he is getting at, but not too sure either.

    “…a brain simulation has to consider two layers, equally important, the physical layer, neurons simulation, and the logical architecture level, synaptical networks.”
    Synaptical network architecture is part of the initial condition when you set the simulated brain free to run on its own.

    It has to be fun to play with someone else’s brain, even if it is a simulated one.

  57. Arnold Trehub says:

    Kar and Vicente,

    The *intrinsic perspectival origin* is the self-locus, where the *core self* is located in retinoid space. Everything that you consciously experience is experienced at some angular separation from you (somewhere) in the space around you. For example, the floor is under you; the computer screen is in front of you; your feet are below you. If you stand on your head, your feet are above you. If you want to scratch your head, you have to reach up. You have a left shoulder and a right shoulder. If you want to scratch your back, you have to reach behind you. See Figs. 1 and 3, here:

    http://evans-experientialism.freewebspace.com/trehub01.htm

  58. Vicente says:

    Arnold,

    Vicente, how can we prove a scientific theory?

    We can’t, but evidence can prove it wrong, that’s all. Theories are true as long as they are, for a certain range or scope. I very much agree with Karl Popper and his “falsifiability” concept as a cornerstone of the philosophy of science.

    Getting back to dual-aspect monism… I could accept that, for example, Mount Everest shows dual-aspect or multiple-aspect monism. The mountain is one (monism), but depending on the direction you take to approach it, and the weather, and the time, and the season, the images of the mountain you get can differ a lot from each other; that is multiple-aspect monism. The important thing is that you have many 1pp’s for an object (one per observer); if you want to call those 3pp’s, OK. But the mountain itself has no 1pp.

    For the brain case, you can have many 3pp’s, but they are really 1pp’s, since those 3pp’s are nothing but the 1pp’s in other brains. 3pp’s do not exist ‘per se’; they are 1pp’s in some brain, you see. The neuronal processes, as concepts, are just ideas in other brains (just to complicate it ad infinitum), and that one too. This is a result of the fact that the concept of observation cannot be used in ordinary terms when applied to consciousness.

    What you are saying is: a person looks at a flower; when his brain “looks at its own processes” they look like a flower; when you look at his processes they look like activity patterns… which is your brain’s 1pp looking at his processes, which look like activity patterns; then your colleague looks at your brain with his brain… and so on.

    I don’t see why any arrangement of matter, in the form of a retinoid space or an Intel processor, should entail any kind of phenomenal experience, unless additional “ingredients” are added to the system.

  59. Vicente says:

    Kar Lee,

    Synaptical network architecture is part of the initial condition when you set the simulated brain free to run on its own.

    Initial conditions would be the activity patterns at t=0s. I see the synaptic network as part of the structure of the system, rather than as an initial condition. Of course, the network topology will change with time (“plasticity”), but when you do the neuron layout you have to decide which neuron you connect to which other, where, with what strength, with what kind of synapse… etc. OK, it could be an initial topology, but only if you are interested in plasticity as a simulation output. Since it is impossible to do it one by one, some kind of algorithm would be needed; probably we would have to copy natural growth processes and subsequent pruning… I am tired just thinking of the workload involved…

    You do it, I play.
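    Vicente’s two-layer distinction can be made concrete in a toy way: in a simulation, the synaptic architecture is a fixed weight matrix (the structural layer), while the neural activity at t=0 is the initial condition. A minimal sketch, with entirely made-up dynamics (a leaky tanh rate network, nothing like a real neuron, and the function name `simulate` is just an illustration):

    ```python
    import numpy as np

    def simulate(W, x0, steps, dt=0.1):
        # W is the structural layer: which unit connects to which, and
        # how strongly. x0 is the initial condition: activity at t = 0.
        x = x0.copy()
        for _ in range(steps):
            x = x + dt * (-x + np.tanh(W @ x))  # toy leaky rate dynamics
        return x

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.5, size=(4, 4))   # structure: fixed during the run
    x0 = rng.normal(0.0, 1.0, size=4)       # initial condition at t = 0
    x_final = simulate(W, x0, steps=100)
    ```

    With plasticity, W itself would become part of the evolving state, which matches Vicente’s point that the topology is “an initial condition” only if plasticity is itself a simulation output.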

  60. Arnold Trehub says:

    Vicente: “I don’t see why any arrangement of matter, in the form of a retinoid space or an Intel processor, should entail any kind of phenomenal experience, unless additional “ingredients” are added to the system.”

    If “matter” is something in the physical world, wouldn’t the added “ingredients” also be an “arrangement of matter”?

    It seems to me that if the biophysical structure and dynamics of the retinoid system successfully predict previously inexplicable phenomenal experiences, and if there is no “knockout” falsification of the theoretical model, it passes as a credible candidate model of consciousness.

  61. Vicente says:

    Arnold,

    If “matter” is something in the physical world, wouldn’t the added “ingredients” also be an “arrangement of matter”?

    I really don’t know; I believe not. I believe a new and broader concept of reality is required. I said “ingredients” in a very general sense.

    I think that your theory and model, as far as I have understood them, could perfectly well be among the most important building blocks of a credible candidate model of consciousness.

    Arnold, my main laboratory is my mind and personal experience, contrasted against the theories and models I come across (like yours). For the time being, no theory has got me close to that aha! moment Peter likes to refer to, and I am afraid I probably won’t have it in my lifetime. I suspect that the “subjective perspective” the retinoid system provides us with is located in such a position that it deprives us of the global view needed to understand the whole system.

    My idea is that consciousness is the SUBJECT required for any OBJECT to exist. Consciousness is the basis for existence, for pure being. Objects can’t exist without a consciousness that creates them and provides meaning, and consciousness probably can’t exist either without objects that serve as content; it is that dual subject-object relation that we as individuals operate. That is why it is so important to be selective with the objects we incorporate into our minds. How the system operates, how intentionality emerges, I don’t know. In a way, consciousness is the fusion, or maybe the events that happen at the interface, between epistemology and ontology. The Universe = Subject + Object, but it seems that this subject can’t become an object for itself. Probably all this last vague rambling sounds empty, not to say stupid, to you. This is one of the effects I have discovered in my lab: sometimes I have very abstract ideas that I can hardly translate into words, but they appeal to me as much more intellectually satisfying than any available theory. To me this is a profound conscious experience that your model does not consider at all.

    How to bring all these issues to the proper scientific arena? I don’t know.

  62. Kar Lee says:

    Arnold,
    Would you still be interested in addressing my comment [50] above?

  63. Arnold Trehub says:

    Kar, a simulation of the brain’s retinoid system is not a clone of the retinoid system. Unless it is a clone of the system it will not have the *real* properties of the system. The most detailed simulation of a tornado will not destroy your house.

  64. Kar Lee says:

    Arnold,
    Again, simulating a tornado will not destroy your house, but a simulation of your brain will act like your brain. If those output signals are fed to your body, your body will act like it has a real brain. So, in brain simulation, the simulation is indistinguishable from the real thing from the outside.

  65. Michael Baggot says:

    Kar, it seems to me that your notion of an MD simulation is in fact a functional clone, albeit not a real clone, in that the experienced qualia would not be the same, i.e., there would probably be different spectra for rainbows, taste, etc. More importantly, such a simulation would not require an actual understanding of the brain’s underlying computational architecture – which I submit is responsible for qualial experience. Quite simply, we would be no further along in our pursuit of neural computation than we already are.

  66. Vicente says:

    Kar Lee and Arnold, #63 and #64 are a great summary of what all this is about.

    Kar Lee, behaviorists would agree; the point is that the phenomenal experience, in which conscious subjectivity arises, can’t be produced by the simulation. But you are right that the brain simulation *should* produce (predict) identical motor output signaling (including speech). That simulation would be, in real terms, a ZOMBIE brain.

    Then, we would have to see if we can understand and interpret every motor output produced by the simulation, outputs that our bodies take naturally, but we might not.

    Besides, there are internal processes, like retrieving a particular memory; how are we going to check those in the simulation? Maybe you could ask the simulation what it is recalling, and the “speech output” will tell you.

    But how are you going to load personal biographical data and information into the simulation? How are you going to feed it the input signals of the body and environment?

    For the initial information data, first you would need to know the brain code to store it, which is quite unknown as yet; and for the environment, you are going to need another virtual-reality simulator.

    This can only be a thought experiment for conceptual purposes.

    Another question is: what kinds of clones of the brain could be acceptable? Probably none, only other real brains.

    Finally, two outputs… physical and phenomenal (a dual-aspect monist simulation)… I can’t help humming the old song lyrics… “always the same theme… going on… and on… and on…”

  67. Kar Lee says:

    Michael[65],
    Look at it as a molecular level simulation based on the physics of molecular interactions and assuming nothing else. The concept of qualia is not even involved.

  68. Kar Lee says:

    Vicente,
    “how are you going to load personal biographical data and information in the simulation…” hmmm…
    I was hoping that we could choose not to get bogged down in technical details and only work on the concept… (a brain has to have a structure, so let’s just assume that we somehow have this structure, and not worry about how it is obtained). On the other hand, if one chooses to get really involved with technicalities, here is an interesting article written by Ken Hayworth titled “Killed by bad philosophy” http://www.brainpreservation.org/content/killed-bad-philosophy. Hayworth has this idea of cutting the brain into slices of a certain thickness, dissolving the tissues and filling them with some kind of plastic to get at the structure of the brain, so as to preserve the memory of a brain and upload it onto the internet.
    However, if I have the choice, I will still want to avoid technicality, so that I can stay in a comfortable armchair doing philosophy. :)

  69. Vicente says:

    Kar Lee,

    Yes, you are right, but note that in this particular case, identifying the unfeasibility of a certain technicality could make a whole world of philosophical difference.

  70. Kar Lee says:

    Vicente,
    I hope in this case the technicality will not be a stumbling block.
    Since we know the brain is made of matter and has a structure, we just reason that if we build a virtual model of this material brain, we have a virtual brain. If molecular detail is all you need (though I doubt it), then if the real brain has one molecule moved, the simulated brain also has one molecule moved (as long as the simulated brain follows the same laws of physics as the real brain does). That should not be too controversial.

    I look at this particular technicality problem in the same way I look at objections to Einstein’s thought experiment inside a free-falling elevator: it was thought that when people were in free-fall their hearts stopped (it was the early 19th century), and so the free-fall elevator thought experiment could not be carried out, and so the equivalence principle must not be right. I would argue that we could put, in the thought experiment, a person whose heart would not stop under free-fall (now we know that is everyone) into the elevator and complete the thought experiment, and the conceptual part would still be intact.

    So, if we just build a virtual model of a brain (that is the simulation), it will necessarily include the “privileged egocentric perspective” that Arnold has been talking about. That means Arnold’s definition of consciousness does not need to be realized on a neuronal substrate. Silicon works just fine.

  71. Kar Lee says:

    Correction: “early 19th century” should be “early 20th century”. :)

  72. Vicente says:

    Kar Lee,

    if the real brain has one molecule moved, the simulated brain also has one molecule moved

    This is the issue. The simulated brain has no molecule. In the simulation, at that step, the coordinates for that molecule take updated values which, if the simulation is correct, are coherent with a possible real movement. On one side you have a real molecule; on the other, a variable value whose only physical existence is the currents in some register of the RAM… That is why I don’t think the simulation will be conscious… maybe comparing that simulation with a real brain could show what’s missing, and whether consciousness can be thought of as having real agency.

  73. Kar Lee says:

    Vicente,
    So, the important things are really the outputs, and those are the things that we can compare. At the end of the simulation, there are these electrochemical outputs which have counterparts in the real brain; both can be fed into a biological body so that the body can speak, or move a limb. If these movements match, the two cases are indistinguishable.

    I don’t believe the simulated brain will be conscious either. I am just saying these things for argument’s sake, and to keep up the pressure on Arnold ;)

  74. Vicente says:

    Kar Lee, yeah, that’s it. On the other hand, considering your Universal Mind approach, it could very well be that we are comparing two “computing” processes after all… just two different HWs solving the same equations…

    An interesting case would be how the simulation produces emotions. I would be really surprised if those cables plugged into a body, or a body emulation, at some point produced a sad facial expression and tears… poor HAL. Maybe a human brain is too much to begin with, but what about a simple reptile one? That might be affordable, just to check the motor part.

    Be cautious, you don’t want to see Arnold’s wrath… in argumentative-output terms, I mean. I have the impression that his foundations in brain functional allocation and neural pathways are some orders of magnitude better than yours and mine together… :(

  75. Kar Lee says:

    Vicente,
    “I have the impression that his foundations in brain functional allocation and neural pathways are some orders of magnitude better than yours and mine together…”

    I think Arnold is smiling right now. ;)
