Alters of the Universe

Bernardo Kastrup has some marvellous invective against AI engineers in this piece…

The computer engineer’s dream of birthing a conscious child into the world without the messiness and fragility of life is an infantile delusion; a confused, partial, distorted projection of archetypal images and drives. It is the expression of the male’s hidden aspiration for the female’s divine power of creation. It represents a confused attempt to transcend the deep-seated fear of one’s own nature as a living, breathing entity condemned to death from birth. It embodies a misguided and utterly useless search for the eternal, motivated only by one’s amnesia of one’s own true nature. The fable of artificial consciousness is the imaginary band-aid sought to cover the engineer’s wound of ignorance.

I have been this engineer.

I think it’s untrue, but you don’t have to share the sentiment to appreciate the splendid rhetoric.

Kastrup distinguishes intelligence, which is a legitimate matter of inputs, outputs and the functions that connect them, from consciousness, the true what-it-is-like-ness of subjectivity. In essence he just doesn’t see how setting up functions in a machine can ever touch the latter.

Not that Kastrup has a closed mind: he speaks approvingly of Pentti Haikonen’s proposed architecture; he just doesn’t think it works. As Kastrup sees it, Haikonen’s network merely gathers together sparks of consciousness: it then does a plausible job of bringing them together to form more complex kinds of cognition, but in Kastrup’s eyes it assumes that consciousness is there to be gathered in the first place: that it exists out there in tiny parcels amenable to this kind of treatment. There is in fact, he thinks, absolutely no reason to think that this kind of panpsychism is true: no reason to think that rocks or drops of water have any kind of conscious experience at all.

I don’t know whether that is the right way to construe Haikonen’s project (I doubt whether gathering experiential sparks is exactly what Haikonen supposed he was about). Interestingly, though Kastrup is against the normal kind of panpsychism (if ‘normal panpsychism’ is an admissible concept), his own view is essentially a more unusual variety of it.

Kastrup considers that we’re dealing with two aspects here, internal and external. Our minds have both: the external is objective, the internal is subjective. Why wouldn’t the world also have these two aspects? (Actually it’s hard to say why anything should have them, and we may suspect that by taking it as a given we’re in danger of smuggling half the mystery out of the problem, but let that pass.) Kastrup takes it as natural to conclude that the world as a whole must indeed have the two aspects (I think at this point he may have inadvertently ‘proved’ the existence of God in the form of a conscious cosmos, which is regrettable, but again let’s go with it for now); but not parts of the world. The brain, we know, has experience, but the groups of neurons that make it up do not (do we actually know that?); it follows that while the world as a whole has an internal aspect, objects or entities within it generally do not.

Yet of course, the brain manages to have two aspects, which must surely have something to do with the structure of the brain? May we not suspect that whatever it is that allows the brain to have an internal aspect, a machine could in principle have it too? I don’t think Kastrup engages effectively with this objection; his view seems to be that metabolism is essential, though why that should be, and why machines can’t have some form of metabolism, we don’t know.

The argument, then, doesn’t seem convincing, but it must be granted that Kastrup has an original and striking vision: our consciousnesses, he suggests, are essentially like the ‘alters’ of Dissociative Identity Disorder, better known as Multiple Personality, in which several different people seem to inhabit a single human being. We are, he says, like the accidental alternate identities of the Universe (again, I think you could say, of God, though Kastrup clearly doesn’t want to).

As with Kastrup’s condemnation of AI engineering, I don’t think at all that he is right, but it is a great idea. It is probable that in his book-length treatments of these ideas Kastrup makes a stronger case than I have given him credit for above, but I do in any case admire the originality of his thinking, and the clarity and force with which he expresses it.

60 thoughts on “Alters of the Universe”

  1. This calls to mind something Searle once said:

    “Oddly enough I have encountered more passion from adherents of the computational theory of the mind than from adherents of traditional religious doctrines of the soul. Some computationalists invest an almost religious intensity into their faith that our deepest problems about the mind will have a computational solution. Many people apparently believe that somehow or other, unless we are proven to be computers, something terribly important will be lost”

    On the second part – I’ve discussed this Idealism with Kastrup before. Taking mind as the ontological primitive definitely gets around questions left unanswered by appeals to emergence and complexity (though perhaps Bakker’s BBT has a way around that?), but I think it leaves a certain question unanswered which relates to the for-ness of intentionality and consciousness.

    Namely, if consciousness is always *for* someone, as is intentionality, how do we move from the idea of a single mind to multiple minds? How do we go from Whole to Many? I think the DID idea suggests how this might be accomplished, but upon reflection there still seems to be a problem in preserving the original Mind coexisting with the alters.

    I’ve also disagreed with him about Panpsychism, at least of the kind endorsed by Freya Mathews and Philip Goff, which is of a more holistic sort. This faces the same Whole -> Many problem as Idealism, but I think that’s less insurmountable than the Combination Problem. I don’t think these ideas should be accepted as definitive from the viewpoint of our scientific consensus, as I grow more inclined to think we just need to accumulate more evidence to overcome the philosophical impasse, but I think there’s as much philosophical weight there as for any other -ism.

    As for AI, as I’ve noted I think it’s intentionality rather than subjectivity that deals the death blow to artificial “intelligence”. Since we know so little about subjectivity we can’t really say what has it; but if a Turing Machine can be subjective (though apparently only when running the right program?), why not a thunderstorm, a car engine, or any other instance of “complexity”?

  2. People, while keen to report subjective experience, don’t at all seem to report how they are reporting it. Are they detecting it somehow? Okay, if they can detect subjective experience, why can’t they detect how they are detecting it? How come that detection is absent of any feeling, while subjective experience is reported as rich with feeling?

    Part of the problem to consider is that people take the subjective experience as the one experience they are having – when really there are two detectors. The second detector provides stimulus, yet its own absence from the report makes it seem as if the subject it focuses on simply has that stimulus in it. As a hypothesis, anyway.

  3. Callan: “Okay, if they can detect subjective experience, why can’t they detect how they are detecting it?”

    Because subjective experience as such is not detected — it is just experienced. What we detect are the *sensory components* of subjective experience. We can’t detect how we detect because we have no brain mechanisms that can detect the activity of our own detectors. But it is possible to detect how other brains detect. This is what cognitive neuroscience is about.

  4. Arnold Trehub is right. We can detect neural activities in the brain, but we cannot detect how this activity appears to us, if at all. Here we must trust the report of the subject. The perception theory of consciousness proposes that consciousness is nothing more than the internal appearance of our sensory signals. Our thoughts and imaginations become conscious as soon as they are returned into virtual percepts. The Haikonen architecture does this (Haikonen: Consciousness and Robot Sentience, World Scientific 2012). Conscious percepts are experienced. Consider e.g. rhythms. In the computer the internal appearance and experience remain missing. The robot XCR-1 is an experiment towards neural robots with possible internal appearances. I respect Kastrup’s view, but I am also very skeptical about the existence of “tiny parcels of consciousness”. Consciousness is not a substance; it is the way in which we experience some of our neural activity. BTW, nice website!

  5. @Pentti Haikonen: As I understand it Kastrup is saying that for an artificial intelligence to be conscious there would already have to be some kind of consciousness involved. I believe it’s the same objection Harris has made when he notes that claiming to get consciousness from non-conscious matter is asking for a something-from-nothing miracle:

    http://www.samharris.org/blog/item/the-mystery-of-consciousness

    “Consciousness—the sheer fact that this universe is illuminated by sentience—is precisely what unconsciousness is not. And I believe that no description of unconscious complexity will fully account for it. It seems to me that just as “something” and “nothing,” however juxtaposed, can do no explanatory work, an analysis of purely physical processes will never yield a picture of consciousness. However, this is not to say that some other thesis about consciousness must be true.”

    So only by having tiny bits of consciousness or proto-consciousness (which some panpsychists claim as a thing) could any artificial intelligence ever have a subjective inner life. Otherwise the project (at least according to this argument) is doomed from the start.

    Kastrup’s own view – as per my understanding of our discussions – is Idealistic, so consciousness is all there is rather than “bits of consciousness” being in matter.

  6. @Sci,

    Science does not *fully account* for any physical process. Why should we expect to fully account for consciousness? Even though we can’t explain the sheer existence of consciousness, I think we are making real progress in explaining/understanding the biological mechanisms that give us our conscious experiences.

  7. Ah, I was simply noting that Kastrup wasn’t advocating a panpsychist “bits of consciousness” view.

    I referenced Harris because it seems to me the argument Harris makes is akin to the one Kastrup is making.

  8. Sci: Kastrup proposes that consciousness is an ontological primitive like (possibly) electrons that cannot be explained any further. From this he concludes that consciousness cannot be artificially created. Instead, it is consciousness that creates the whole material world and universe (Remember Bishop Berkeley, boo, boo!). This is not necessarily a logical or factual truth. The explanation of consciousness as some immaterial primitive is not an explanation at all; here what is to be explained is explained via the unexplainable and unobservable. As an engineer I cannot accept that. And moreover, that approach would be fruitless for AI. But one should not be naive, either. 50 years of search for algorithmic consciousness has produced nothing, so the answer is elsewhere. I do not think consciousness is a computational algorithm and therefore I study real (not simulated) systems with dynamic reactions, systems that might experience. Robot XCR-1 has externally detectable inner virtual percepts, which it itself can verbally report, but whether these appear internally as something is an open question. If they did, XCR-1 would be phenomenally conscious to some minuscule degree. And once more, in my work I do not assume “immaterial bits of consciousness”, whatever they might be.

  9. Arnold,

    Because subjective experience as such is not detected — it is just experienced. What we detect are the *sensory components* of subjective experience. We can’t detect how we detect because we have no brain mechanisms that can detect the activity of our own detectors. But it is possible to detect how other brains detect. This is what cognitive neuroscience is about.

    If cognitive neuroscience can figure that out, can ‘subjective experience’ be explained by a hypothesis that the reported ‘subjective experience’ comes from a second-stage detector detecting the first stage, but not detecting itself whilst doing that? Thus stimulus just seems to occur – and is ‘experienced’. With no reference to a second detector, the responses of the first detector (having been detected by the second stage) would appear to be as real as a rock or tree. Subjective experience would appear as real as a rock.

    Imagine if you had a sensor wired to monitor a certain section of synapses in your brain – and its output wired to the nerves in your fingers, the ones that report touch.

    When you would think a certain thing, you’d get the sensation of touching something. It would feel like thinking that thing is an actual object – as much as a rock is worthy of being reported as a thing, so too would the thought seem worthy of being reported as a thing.

  10. @Pentti: Thanks for the clarification. Will definitely take a closer look at your work. Like Searle I can see a physical recreation of the brain in a robot/android – as opposed to a mere program – having mental content. But to be convinced I think I’d first have to know how our brains produce consciousness & intentionality.

  11. Sci,

    Intentionality is the easy part. In the robot XCR-1 neural signal patterns are about something and mean something, because their meaning is causally grounded. XCR-1 operates as if it were consciously perceiving its mental content, but of course this content is very, very limited in scope and resolution. However, this shows that the basic principle of operation (HCA architecture) is a valid concept for cognitive control. And effective, too, because of its parallel neural operation. The hard question is: does XCR-1 have internal qualitative appearances? If this could be solved, then perhaps we would also understand how the brain produces consciousness.

  12. Pentti,

    Does Robot XCR-1 have an internal analogical representation of the volumetric space in which it exists, that includes a representation of some part of its structure as a fixed locus of perspectival origin within this spatial representation?

  13. Arnold,

    Currently XCR-1 has very limited visual perception capacity and it cannot form spatial “representations”. However, the realization of this capacity would be a trivial task, more neurons. Unfortunately there are no associative neuron group chips available and therefore all the neurons are realized by operational amplifier chips and discrete components and these take a lot of space (and a lot of work). This shortcoming does not necessarily contribute to the question of consciousness. However, XCR-1 has touch sensors in its “hands”, global shock sensing capacity, a “petting” sensor and the ability to sense its own motion. These sensations relate to the “self” of the robot. The robot is also able to hear and recognize some spoken words so that a limited verbal interaction is possible, as demonstrated in my “RobotCognition”, “Robot Pain” and “Scare Robot” demo videos. https://www.youtube.com/user/PenHaiko

    BTW, I have your book “The Cognitive Brain” on my desk. That is a really nice early book on the neural approach. I have mentioned it in my book “Robot Brains”.

  14. Pentti,

    Thanks for your kind words.

    You say “Currently XCR-1 has very limited visual perception capacity and it cannot form spatial “representations”. However, the realization of this capacity would be a trivial task, more neurons.”

    I don’t doubt that more neurons would enable it to form spatial representations. But what kind of spatial representations? If you can show how artificial components can be assembled to create an internal analogical representation of the volumetric space in which the robot exists, that includes a representation of some part of the robot’s structure as a fixed locus of perspectival origin within this spatial representation, this would be an important step toward the creation of artificial *subjectivity*. I have asked other knowledgeable investigators if they know of such an artifact, but so far, none do.

    You might be interested in looking at my article “Where Am I? Redux” on my Research Gate page.

  15. Arnold,

    Spatial awareness and understanding calls for motion. Small babies reach out to every direction with their tiny hands and in this way they associate the seen positions with the neural motor commands that are necessary for the corresponding position of the hands. After this learning process they can reach out correctly for a seen object and also point towards far away objects. In a similar way larger-scale spatial understanding arises when babies learn to walk and cover longer distances. This is a trivial task for the neural association based HCA architecture, which uses the same learned “mapping” in imaginations and motion planning, so I wonder if I have understood your question correctly.

    I do not like very much the concept “representation” at that level of operation, because one would have to ask: to whom do they represent? In the HCA the information is stored as a large number of synaptic strengths that cannot (and need not) be introspected or observed as such by the system. Consciously perceived appearances might be called representations.

    I will see your article.
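
A minimal sketch of this kind of seen-position-to-motor-command association, assuming a toy world with one-hot codes and a plain Hebbian update; illustrative only, since the HCA itself is built from analogue associative neurons rather than a program:

    # Toy Hebbian associator: links "seen positions" to motor commands, so that
    # after a babbling phase a seen target recalls the command that put the hand there.
    import numpy as np

    n_positions = 5            # discrete visual positions (toy resolution)
    n_commands = 5             # discrete motor commands, one per position in this toy world
    W = np.zeros((n_commands, n_positions))   # associative (synaptic) weights

    def one_hot(i, n):
        v = np.zeros(n)
        v[i] = 1.0
        return v

    rng = np.random.default_rng(0)
    for _ in range(200):                       # "babbling": random motor commands
        cmd = rng.integers(n_commands)
        seen = cmd                             # toy world: command i puts the hand at position i
        # Hebbian update: strengthen the link between the co-active units
        W += np.outer(one_hot(cmd, n_commands), one_hot(seen, n_positions))

    # Reaching: a seen target position now evokes the associated motor command.
    target = 3
    print(int(np.argmax(W @ one_hot(target, n_positions))))   # -> 3

The same weight matrix can be read in either direction (positions to commands, or commands to positions), which is roughly what would let one learned mapping serve both perception-driven reaching and imagined motion planning.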

  16. Pentti, thanks for posting about your research. That’s just the kind of thing I’d like to see more of; plus, it’s clear that you’re very aware of the problems you’re facing and careful not to overstate your results—too often, similar research is couched in terms that make it seem as if generating proper consciousness really is just an engineering problem at this point. The only thing I would maybe quibble about is your claim that intentionality is founded in causal relations…

    But what actually fascinated me most about your videos is how quickly I attribute internal states to your XCR-1—I kept viscerally reacting to the plight of the scared robot, or to its response to being hurt (No! Don’t hurt the poor little robot!), even though these are of course very rudimentary imitations of the complexity of human or even animal responses. Of course, if we really have grounds on which to judge the robot to possess even some glimmer of phenomenology, then one might raise ethical issues about such experiments—do we have the right to bring into existence beings just to experiment on them, and, in doing so, subject them to what might be intolerable pain? (Thomas Metzinger has discussed this at length.)

    Of course, I don’t believe we’re quite at that point yet; but the mere point that it’s beginning to cast its shadow on the horizon shows just how much progress there is in this field.

  17. Jochen,

    Thanks for the comments. In the philosophy of mind intentionality is about the observation that the contents of consciousness are about something, i.e. they have meaning. Causal grounding works well here (also in the XCR-1) with some twists; see my books. In everyday speech intentionality has another meaning: I have the intention to do something. This would be realised via needs and plans.
    Is it unethical to produce pain-experiencing beings? Maybe. However, people make babies. Without a little bit of pain and pleasure it would be difficult to teach any practical or moral values to the robot (or a child). A practical value is e.g. one that says do not bump into obstacles or touch a flame. You try, get hurt and learn.

  18. Pentti: “Small babies reach out to every direction with their tiny hands and in this way they associate the *seen positions* [emphasis added] with the neural motor commands that are necessary for the corresponding position of the hands.”

    The *seen position* of the hand is at the crux of the problem of consciousness. The hand is seen (consciously experienced) in a surrounding 3D space in relation to a locus of perspectival origin within the child doing the seeing. But the child has no sensory apparatus to detect the global volumetric space in which he/she exists. Because of this sensory lack, I have proposed the innate neuronal structure and dynamics of the retinoid system to provide the required egocentric (subjective ?) representation in the brain. It seems to me that any artifact to be called a conscious robot must have this kind of mechanism built into it.

  19. In the philosophy of mind intentionality is about the observation that the contents of consciousness are about something, i.e. they have meaning. Causal grounding works well here (also in the XCR-1) with some twists; see my books.

    Yes, I did mean intentionality as originated by Brentano. And I’ll check out your books (I just found out I can access Robot Brains online via our university’s library), but so far, I’ve found proposals to ground intentionality in causality to be lacking, ultimately for the reason that causal links are structural, and don’t suffice to pin down actual content—i.e. you may get at relations between things, but not at the things themselves; but to our introspection, it seems that the intentional character of our thoughts is always directed at a concrete thing, rather than at an empty web of relations.

  20. Arnold,

    Normal children usually have a very good sensory apparatus for the perception of the surrounding 3D space, namely eyes. However, also blind people are considered as conscious beings.

    Jochen,

    Intentionality must begin with perception. Our sensors provide the brain with neural signals that are causally connected to the perceived entity. This calls for “rigid” wiring and a sub-symbolic approach, not symbols that would have to be interpreted. In the brain these signal patterns appear as qualia that are self-explanatory. (How does this happen? That’s the hard problem – or is it?) Next these patterns can be associated with other patterns and in this way they can be used as symbols (e.g. words for things and matters; in this case these linguistic symbols are perceived sound patterns or sound qualia). The same applies to every other sensory and motor modality. The web of relations resides in the strengths of synapses and as such it cannot be an object for introspection. “The empty web of relations” is only a possibility in a philosopher’s mind. RIP priest Brentano. Robot XCR-1 may be simple, but not so simple that this problem would not manifest itself, if it were real. I do not know if programmed symbolic AI has these kinds of problems.

  21. Pentti: “Normal children usually have a very good sensory apparatus for the perception of the surrounding 3D space, namely eyes. [1] However, also blind people are considered as conscious beings.” [2]

    1. This is a common misunderstanding. The 2D retinas of the eyes cannot sense the surrounding 3D space. A special kind of post-retinal brain mechanism is needed to give us an egocentric representation of the coherent volumetric space we live in.

    2. Blind people are conscious because they too have this brain mechanism that represents the space around them from an egocentric perspective. Without it they would be helpless.

  22. Our sensors provide the brain with neural signals that are causally connected to the perceived entity.

    Of course, the problem I see is merely that this data does not suffice to fix the identity of the perceived entity. Or, in other words, a non-intentional system only receives some pattern of data from the outside world, and there’s nothing intrinsic that makes the pattern of data pertain to one object, rather than another—this is just an arbitrary interpretation. So, we can say that the reaction of a robot is about something in its environment—say, a tree—because that tree (and the robot’s reactions) are simultaneous intentional content of our mental picture of the world. But then ascribing intentionality to the robot, i.e. that its action was ‘about’ the tree, is to misplace our own intentionality.

  23. Arnold,

    Of course you are right that large neuron groups are necessary for the processing of visual information. However, it is not true that 2D eyes cannot extract 3D information. Cover one eye and move your head; you will see that objects “move” behind each other and in this way you can determine their relative 3D positions. Static image understanding is a difficult task even for the human brain. Therefore visual sense-making is an active bottom-up and top-down process that among other tricks relies on sub-conscious experiments like these. In the HCA architecture visually seen objects are associated with their corresponding visual directions, which are actually motor signals that guide head and gaze directions. Therefore the imagination of a gaze direction will evoke an imagined image of the object last seen there and vice versa. This is a short-term memory operation that is responsible for visual situation awareness. In “Robot Brains” I explain how long-term memories are formed from these.
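
The head-motion trick above can be made quantitative with the standard motion-parallax relation (a gloss, not part of the comment itself). For a small lateral head translation \(T\), an object at distance \(d\) shifts through a visual angle of roughly

\[ \theta \approx \frac{T}{d}, \qquad\text{so}\qquad d \approx \frac{T}{\theta}. \]

Nearer objects sweep through larger angles, which is what lets relative depth ordering be read off even with one eye covered.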

  24. Jochen,

    You should see that there are two aspects in sensory perception. Firstly, what is perceived is the sub-symbolic sensory neural signal pattern that appears as a self-explanatory quale. Red is red, pain is pain, form is form, no other information is required. Secondly, it is possible to associate further information with the perceived raw patterns. This is what babies and little children do, when they begin to understand their environment. E.g. when we see a hammer, we know what possibilities of action there are. For a robot a hammer is basically an object with form. However, an HCA-based robot can associate any piece of information and any feasible use and action with a hammer, just like we do. So what is the problem here? What is the act of intentionality that humans can do and robots cannot?

  25. Pentti,

    Extracting 3D information by head motion is nothing like having a global coherent 3D representation of your surrounding space. We could not possibly look at a road map and understand how to use it to reach a particular destination without an internal representation of the space to which the map refers.

  26. Arnold,

    It is a great honor to argue with a well-known prominent expert like you. We both agree that a large number of organized neurons is needed for visual information. We can call these neurons whatever we like. However, I have done successful orienteering in unfamiliar wooded countryside with maps and I can assure you that I did not have any 3D representations in my mind. If I had, then I would not have needed any maps. The map is the representation of the terrain and that is what I compare to landmarks that I see. The additional necessary information in my head is the meaning of the symbols in the map. The space to which the map refers is outside.

  27. Pentti: ” The space to which the map refers is outside.”

    True, but the space that you must experience while looking at the map and relating it to the actions you must take is inside your brain. As you read these words, you see a computer screen some distance in front of you. You experience the space between the computer and your self, and you experience the space in which you and the computer and the walls around you exist. All of this space must be represented within your brain for you to experience it. If it is not in your brain then your personal conscious experience of this space must exist in a separate, non-physical domain — mind-brain dualism — outside the domain of science.

  28. Arnold,

    I may not understand exactly what you mean. In technical terms I see no need for a robot (or myself) to experience empty space between observed objects. A robot is able to grab a nearby object solely by knowing the distance and direction to the object. I see no reason to represent this empty space inside a robot; what additional useful or even necessary information would this convey? And how should the empty space be represented in a neural network? XCR-1 robot does not have any internal representation of empty space between it and a target, yet it is able to home in on the target and grab it.

  29. Pentti: “A robot is able to grab a nearby object solely by knowing the distance and direction to the object.”

    The robot knows nothing about distance and direction. Just as a photo-mechanical door opener knows nothing about the presence or absence of light. A robot is able to grasp a nearby object because it has been designed by a human, who knows about distance and direction, to detect the presence of an object and systematically convert the robot’s imaging signals into direction and distance signals to control its “graspers”.

    Whether or not I/you need to experience empty space between objects, I know that I experience it and I bet that you experience it too. Not only that, but I know that there are empty places in the space on top of my table into which I can put objects.

  30. Arnold,

    What is “knowing” other than having the information and being able to use and possibly report it? What mystery ingredient is missing? I do not think it matters who has designed what here.
    Nevertheless, my point was that no representation of empty space between the subject and the target is necessary from a technical point of view. An empty space on the table is another story. My wife always laments that the space on my table should be more empty.

  31. Pentti: “What is “knowing” other than having the information and being able to use and possibly report it?”

    Would you say that a photo-mechanical door opener *knows* about light beams? Or does it just open a door when light does not hit its sensor — a sensory-motor response? I agree that from a technical point of view a representation of empty space is not necessary, but if we are proposing conscious robots, a representation of the surrounding volumetric space is needed to present something somewhere in relation to a fixed locus of perspectival origin. Otherwise I don’t conceive of a subject having a conscious experience.

  32. Arnold Trehub: “The *seen position* of the hand is at the crux of the problem of consciousness. The hand is seen (consciously experienced) in a surrounding 3D space in relation to a locus of perspectival origin within the child doing the seeing. But the child has no sensory apparatus to detect the global volumetric space in which he/she exists.”

    No sensory apparatus? Wait a minute. Stereoscopic binocular vision isn’t present at birth, true, but it develops in infancy, so that if you assert that a child has no volumetric space detector, you need to specify what age you’re talking about. And vision isn’t the only sense that bears on forming an idea of space. Hearing is “binocular,” too: infants hear sounds originating in various directions. Vestibular sensations (inner ear) likely also contribute. Before infants can get about on their own or even turn their heads to change the view, they are lifted, carried, put down, turned over, rocked. These actions produce experiences of moving, in different ways, through space.

    How an infant perceives the space between one object and another or the whole world of space around his or her body must be largely speculative. However, I believe that infants are conscious without needing to be so Cartesian as all that.

  33. Arnold,

    A photo-mechanical door opener has the information about light beams and it reacts to these. Therefore it should be right to say that it knows something. We also say that a dictionary knows something. However, I am not claiming that door openers and dictionaries are conscious. A conscious entity would have a reportable flow of percepts with qualities (qualia). It would perceive its environment and itself directly and apparently as they are. (Please note the word “apparently”.) The ability to feel phenomenal pain is a good indication of consciousness.

    Cognicious,

    I fully agree, exactly so! (For robotic sound direction detection kindly see https://www.youtube.com/watch?v=9z38gJ5JSyY).

  34. Cognicious,

    Stereoscopic binocular vision does not demonstrate that we have sensory receptors that can detect the space around us. On the contrary, stereoscopic vision seems to depend on the conversion of 2D retinal disparities into neuronal patterns within a post-retinal 3D structure that is the brain’s representation of a volumetric space. For example, see “Modeling the World, Locating the Self, and Selective Attention: The Retinoid System” on my Research Gate page.

  35. Pentti, thank you for reinforcing my opinion.

    As a matter of language use, I wouldn’t say a photo-mechanical door opener or a dictionary knows something, not if I were being careful. I might say informally that a stuck door doesn’t want to open, or the door doesn’t see me standing here, but such expressions are the furthest thing from rigorous description.

    By the way, your video “decided not to” play for me.

    Arnold, yes, of course the sensory receptors in the retina don’t by themselves detect space. They don’t detect objects, either! Peripheral receptors don’t create meaningful/understandable experiences without the CNS. The retinal cells are only the first part of the visual system. They’re sensitive to light, that’s all. They send electrical impulses up the optic nerve, and the brain does the rest. No doubt, you already know this. I can’t tell just what point you’re making by saying that stereoscopic vision doesn’t demonstrate, etc.

  36. Cognicious: “I can’t tell just what point you’re making by saying that stereoscopic vision doesn’t demonstrate, etc.”

    The point that I was making is that, contra the opinion of some, our stereoscopic vision is not evidence that our sensory receptors detect space (a point on which you agree). Furthermore, we are unable to understand how stereopsis can work without our brain having an innate analog representation of 3D space into which our 2D retinal signals can be projected.

  37. Cognicious,

    The video link worked ok for me just a moment ago. Can you see any other youtube videos? Do you have Adobe flashplayer and is it activated?

  38. Arnold: “The point that I was making is that, contra the opinion of some, our stereoscopic vision is not evidence that our sensory receptors detect space (a point on which you agree). Furthermore, we are unable to understand how stereopsis can work without our brain having an innate analog representation of 3D space into which our 2D retinal signals can be projected.” Well, our sensory receptors record kinds of information from which, as our vision matured, we have learned to infer space. Stereoscopic vision is just one way we make a scene look spacey. Atmospheric perspective and the apparent sizes of things are a couple of others. I agree that there is no “space sense” as there is a “sound sense,” for instance.

    The word “innate” in your statement of the brain’s contribution to stereoscopy is an interesting choice. How did you exclude the possibility that a person builds up a mental representation of 3D space through accumulated experience and perhaps with the help of cognition?

    Pentti: “The video link worked ok for me just a moment ago. Can you see any other youtube videos?” I can see some others but not all others. I used to see them all. This problem may have started with my latest Firefox download, I don’t know.

    Peter, will checking the box “Notify me of follow-up comments by email” produce notifications for articles where I check the box but not for other articles? The other box apparently threatens to alert me whenever a post appears anywhere on this blog.

  39. Cognicious: “How did you exclude the possibility that a person builds up a mental representation of 3D space through accumulated experience and perhaps with the help of cognition?”

    Of course, I might be wrong but, excluding magic, neither I nor any other investigator has been able to show how a global coherent representation of 3D space can be built in the brain by learning and inference.

  40. Arnold: “Of course, I might be wrong but, excluding magic, neither I nor any other investigator has been able to show how a global coherent representation of 3D space can be built in the brain by learning and inference.” Then have you or other investigators been able to show that a global coherent representation of 3D space exists from birth, or at least that some sort of template for it does? What’s missing for me is a reason that the innateness hypothesis should be the default, whereas the acquisition hypothesis would need evidence.

  41. Arnold,

    I am convinced that no 3D retinoid exists in the brain. If we assume that one million pixel signals arrive from the left and right eye retinas, then the size of your 3D retinoid would be 10^6 × 10^6 = 10^12 neurons. Unfortunately it is understood that in the brain there are only 10^11 neurons. 3D analog mappings just take too much space. Secondly, our eyes do not operate as you appear to assume. When we look at an object near or far, our eyes rotate so that the object always projects on the foveas. Whenever this does not happen, we see double images. This also means that in your 3D retinoid the middle neuron would always register correlation and accordingly a wrong location.
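
Spelling out the arithmetic behind that estimate, on the assumption that the retinoid would need one cell for every pairing of a left-retina pixel with a right-retina pixel (an assumption here, not necessarily the intended reading):

\[ 10^{6} \times 10^{6} = 10^{12} \text{ cells}, \qquad \text{against roughly } 10^{11} \text{ neurons in the whole brain}, \]

so on this reading the requirement would exceed the brain’s entire neuron count by a factor of ten before any other function is accounted for.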

  42. Pentti,

    1. There is no reason to assume that 1 million pixels arrive at the 3D retinoid. Convergence and compression along the pathways should be expected. What is critical is that retinotopy and spatiotopy be conserved in whatever the total volume of retinoid space might be.

    2. Of course there is binocular convergence in visual fixation, but I don’t see why that would cause a problem.

  43. Arnold,

    1. If the system is acquiring its visual information from the retinoid, then the requirement of one million pixel resolution is an underestimate, already including compression. The fovea is able to provide this at each gaze direction and when the whole environment is scanned, the number of acquired pixels will be tremendous. Actually the shape of your 3D retinoid should be that of a hemisphere, filled with neurons for all possible object locations.
    2. I refer to fig 5 in your paper “Space, self, and the theater of consciousness”. I assume that the fovea is in the middle of your retina. When the eyes are focused on an object at any distance, then the middle receptors are excited and consequently the middle neurons in the retinoid register correlation, no matter how far the object actually is. You might also want to consider, what happens when the object is at infinity; which point is indicated now in the retinoid? It appears to me that the geometry of your idea is not working.
    3. Consider a case when an object behind an obstruction is seen by one eye only. How does your 3D retinoid operate now, when no correlation between the left and right eye images of that object exists?
    4. From the engineering point of view, the 3D retinoid is an unnecessary complication that calls for an immense number of neurons while providing no additional information compared to more economical short-term memory systems. I find it hard to believe that most of the neurons of the brain would be dedicated to this purpose.

  44. Pentti, addressing Arnold: “You might also want to consider, what happens when the object is at infinity; which point is indicated now in the retinoid?” Presumably, a smaller set of points than when the object is close, but centered at the same position in the visual field. If you’re asking, by implication, how the viewer is able to perceive the object as farther away without trying, that ability is acquired as the infant’s brain adapts to life in a 3D world despite having only 2D input from each eye.

    If the retinoid model posits a one-to-one correspondence between rods + cones and neurons in the visual cortex, it can be attacked on mathematical grounds, as you’ve done. (But why not throw color into the pot also and imagine that every wavelength that might meet a retinal cell requires its own neuron to render that hue?) Suppose, however, that the brain operates with a sampling of its inputs at each moment, creating a coherent picture from a spot check of retinal stimuli. It’s already known that the brain fills in backgrounds.

    I had thought that the retinoid model sought to explain how we acquire an understanding of 3D space in general–how we get the concept–but this may be wrong. Perhaps its mission is limited more strictly to vision.

  45. Cognicious: “I had thought that the retinoid model sought to explain how we acquire an understanding of 3D space in general–how we get the concept–but this may be wrong. Perhaps its mission is limited more strictly to vision.”

    The retinoid model explains how we get the general conscious *experience* of a surrounding 3D space. It is not strictly limited to vision. Furthermore, the retinoid model does not posit a strict “one-to-one correspondence between rods and cones” and the neurons in the visual cortex. There is obviously much that is not known about the details of synaptic connectivity patterns between retina, lateral geniculate, and primary visual cortex. Conjectures about capacity limitations are just that — conjectures.

  46. Pentti,

    A visual after-image appears larger when we fixate a distant surface, and it appears smaller when we fixate a near surface. How can this be explained without assuming something like the retinoid model?

  47. Arnold, I know you weren’t talking to me, but . . .

    An afterimage likewise appears larger when we fixate a large surface than when we fixate a small surface at the same distance. The determinant of afterimage size is the area of exhausted, overstimulated retinal cells. Does the retinoid model come into play in this instance?

  48. Cognicious: “An afterimage likewise appears larger when we fixate a large surface than when we fixate a small surface at the same distance.”

    Do you have a reference for this?

    The area of fatigued retinal cells directly determines the retinal image size, but changes in the after-image perception for a given retinal image are determined by the distance of the fixation surface. This, according to the retinoid model, is caused by its size-constancy mechanism operating on a fixed retinal image over varying depth planes (Z planes).

  49. Arnold: “Do you have a reference for this?” No, just ordinary observation. An afterimage formed by fixating a large display is larger than one formed by fixating a small display. The difference is obvious for negative (dark) afterimages in response to light bulbs of different sizes.

    I now think I initially misinterpreted your Post #48. You were referring to the apparent sizes of afterimages when projected onto neutral surfaces at differing distances, not the apparent sizes of afterimages when formed, right?

  50. Arnold,

    About the visual afterimage. A nearby object covers more of far away objects than nearby ones? I do not pretend to be an expert here. How exactly does the 3D retinoid model explain this, and where in the retinoid is the location of the afterimage? Is it near or far or in between?

  51. Cognicious: “I now think I initially misinterpreted your Post #48. You were referring to the apparent sizes of afterimages when projected onto neutral surfaces at differing distances, not the apparent sizes of afterimages when formed, right?”

    Right.

  52. Arnold,

    Fig. 5 in your paper “Space, self, and the theater of consciousness” gives the impression that the location of an object inside the 3D retinoid is determined by the relative positions of the images of the object on the left and right retinas. Obviously afterimages have fixed positions on the retinas and therefore their location and apparent spatial position should be fixed. Now you seem to maintain that the apparent spatial position of an afterimage can change. How can that be?

  53. Pentti,

    Fig. 5 does not show the location of an afterimage. It shows the locations of real objects. In this case the retinal size of objects decreases with distance, and size constancy provides a compensatory mechanism. But the retinal size of an afterimage does not change, so size-constancy, regulated by fixation distance, produces an anomalous effect — the perceived image grows larger as the distance of a fixation surface increases.
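
The size-constancy relation described here is essentially Emmert’s law; stated compactly (the formula is a gloss, not Arnold’s wording):

\[ S \;\propto\; R \times D, \]

where \(S\) is the perceived size, \(R\) the retinal image size and \(D\) the distance of the fixated surface. For a real object \(R\) shrinks roughly as \(1/D\), so the product stays constant and we get ordinary size constancy; for an afterimage \(R\) is fixed, so \(S\) grows with fixation distance, which is the anomalous effect just described.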

  54. Arnold, I lack the background in neurology to follow your argument in detail. I get, at least, that you credit the retinoid system with size constancy. One important question would be whether alternative explanations of size constancy exist.

    There’s something curious about the way afterimages shrink and stretch with viewing distance. When I close my eyes, an afterimage is very small. Now, my eye muscles are relaxed, so that I’m presumably focusing on infinity. Extrapolating from what happens at external focal planes, the afterimage should instead be very large. What do you do about this reversal?

  55. Cognicious: “One important question would be whether alternative explanations of size constancy exist.”

    I know of no alternative explanation. I’ve asked many investigators, but so far there have been no suggestions. Anybody here know of an alternative explanation?

    I don’t know what to expect with an afterimage with eyes closed.

  56. Arnold,

    Retinal afterimages are formed by real objects. Therefore I see no reason why fig. 5 would not also apply to afterimages. Therefore, the corresponding location of an afterimage inside the 3D retinoid should be that of the location of the original object. How does your retinoid know that the visual neural signals of an afterimage should be treated differently (not as in fig. 5) from those of a real object?

  57. Pentti: “Therefore, the corresponding location of an afterimage inside the 3D retinoid should be that of the location of the original object.”

    Not at all. Without the original object in the visual field, the locus of fixation is indeterminate until an external surface is fixated. It is the location of this fixation that determines the location of the afterimage inside the 3D retinoid, and the perceived size of the afterimage.
