Arnold Trehub

It’s not often these days, it seems to me, that a ‘theatre of consciousness’ is proposed or defended: if the idea is mentioned, it’s more commonly to dismiss it or to distance a theory from it. All the more credit, then, to Arnold Trehub, whose Retinoid theory is just that, and a bold and sweeping example, too.

A retinoid system, at its simplest, is an array of neurons which captures and retains a pattern of activity in the retina, the sensitive layer at the back of the eye which translates light into neuronal activity. The retinoid array retains the topology of the original pattern, and because the neurons are autaptic, i.e. they re-stimulate themselves, that pattern can be retained for a while. Naturally we need more than one such array to do anything much, and in fact the proposition is that a series of them act together to form a model of three-dimensional space. That makes it sound a simple matter, but of course it isn’t. Trehub proposes a series of mechanisms by which raw activity in the retina can be translated into a stable, three-dimensional model, making allowance, for example, for the way our foveas (the small central parts of the retinas that do most of the important work – our vision is only sharp in a tiny central area of the visual field) sweep restlessly but selectively across the scene, and so on.
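
The self-sustaining behaviour of autaptic cells is easy to caricature in a few lines of code. What follows is purely a toy sketch of the retention idea, not Trehub’s model: the decay rate, self-excitation strength, and threshold rule are all invented for illustration.

```python
# Toy sketch of an autaptic retinoid cell array: each active cell
# re-excites itself, so a briefly presented "retinal" pattern persists
# for a while after the input is gone. All parameters are illustrative.

DECAY = 0.9         # passive leak per time step
SELF_EXCITE = 0.08  # autaptic self-stimulation, applied while a cell is active
THRESHOLD = 0.2     # activity below this counts as silent

def step(cells):
    """Advance the array one time step with no external input."""
    return [a * DECAY + (SELF_EXCITE if a > THRESHOLD else 0.0) for a in cells]

# Present a simple pattern for one instant...
pattern = [0.0, 1.0, 1.0, 0.0, 1.0]
cells = pattern[:]

# ...then let the array run freely: the topology (which cells are active)
# is retained even as absolute activity levels change.
for _ in range(10):
    cells = step(cells)

retained = [a > THRESHOLD for a in cells]
print(retained)
```

The point of the sketch is just that self-excitation gives a short-term buffer without any long-term storage, which is roughly the role the autaptic cells play in the retinoid arrays.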

Those proposals look ingenious and plausible. I do wonder, though, what the firing of a neuron in the retinoid model of space actually represents in the end. It isn’t simply patterns of retinal activity any more, because the original patterns have been amended and extended – so you might argue that the system is not as retinoid as all that. It’s tempting to think that each point of activity is a bit like a voxel, a three-dimensional pixel or a single tiny brick in a model world, but I’m not sure that’s right either. It certainly goes beyond that, because Trehub wants to bring in all the sensory modalities. How a spatial model works in this respect is not fully clear to me – I’m not sure how we deal with the location of smells, for example, something that seems inherently vague in reality. Perhaps smells are always registered in the retinoid as situated within the nose, although that doesn’t seem to capture the experience altogether; and what about hearing? It’s a common enough experience to hear a sound without having any clear idea of its point of origin, but it won’t quite do to say that sounds are located in the head either. I don’t know how “sounds sort of like it’s coming from over there” is represented in a spatial retinoid model. If we’re going on to deal with the whole world of mental phenomena, we’re clearly going to run into more problems over location – where are my meditations, my emotions, my intimations of immortality, my poignant apostrophisation of the snows of yesteryear?

Another key element in the system is a self-locus retinoid whose job is essentially to record the presumed location of the self in experienced space (and also, if I’ve understood correctly, imagined space, because we can use other retinoid structures to model non-existent spaces, or represent to ourselves our moving through locations we don’t actually occupy). This provides the basis for our sense of self, but importantly there’s much more to it than that.

Trehub suggests that the brain also provides a semantic token system, so that when we see a dog, somewhere in our brain a set of neurons that make up the ‘dog’ token start firing. We also have a token of the self-locus (roughly speaking this amounts to a token of ourselves or something like the idea of ‘me’), which Trehub designates I!  This gives us our sense of identity and pulls together all our tokens and perceptions. It also has an interesting role in the management of our beliefs. In Trehub’s view our beliefs are encoded by tokens in a semantic network as explicit propositions: whether we consider these propositions true or false is determined by a linkage with a special token which indicates the truth value, while a similar linkage to I! determines whether these beliefs belong to us. I presume a linkage to the ‘Fred’ token makes them, in our view, Fred’s beliefs, providing the basis of a ‘theory of mind’.
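
The token idea can be caricatured in code: propositions stored explicitly, with ‘belonging to me’ nothing more than a link to a self token. This is a deliberately naive toy, not Trehub’s implementation; all the names here (I_BANG, attach, believed_by) are invented for illustration.

```python
# A toy sketch of the token-linkage idea: belief-attribution is modelled
# as a link between a person-token and a proposition. Names illustrative.

I_BANG = "I!"    # the self-locus token
FRED = "Fred"    # a token standing for another person

links = set()    # (token, proposition) pairs standing in for synaptic linkages

def attach(token, proposition):
    """Link a proposition to a token (crudely: flag it as that token's belief)."""
    links.add((token, proposition))

def believed_by(token):
    """Return the propositions linked to a given token."""
    return [p for (t, p) in links if t == token]

attach(I_BANG, "dogs bark")   # flagged as my belief
attach(FRED, "cats bark")     # the same machinery gives a crude 'theory of mind'

print(believed_by(I_BANG))
print(believed_by(FRED))
```

Writing it out this way also makes the later worry vivid: nothing about such a link, in itself, obviously amounts to anyone believing anything.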

We now have in place the essentials for the theatre of consciousness: a retinoid-supported model of egocentric space (Trehub calls it egocentric, though it seems to me the self is always just to the side of it looking in), and a prime actor endowed with beliefs and perceptions. Trehub denies that we need a homuncular audience, a ‘little man in our head’. He casts us as actors on our own stage; if there is an audience it’s the unconscious parts of the mind. He accepts that the theatre is in a sense illusory – the representations are not the reality they stand in for – but he wishes to keep the baby even at the price of retaining that much bathwater.

So what can we say about all this? As a theory of consciousness it’s dominated to an unusual degree by vision and space. This fits Trehub’s view of consciousness: he regards the most basic form as a simple sense of presence in a space, with more complex levels involving an awareness of self and then of everything else. I’m not sure a spatially-based conception fits my idea of consciousness so well. When I examine my own mind I find many of the most salient thoughts and feelings have no location; it’s not that their location is vague or unknown, space just doesn’t come into it. Trehub suggests that when we think of a dog, we’re liable to have an image of a dog in our mind: while we can have one, it doesn’t seem an essential part of the process to me. If I’m thinking of an arbitrary dog, I don’t call up an image, let alone one situated in an imaginary space: if you ask me what colour or breed my arbitrary dog is, I have to make something up in order to answer. (In fact the position is complicated because reflection suggests there are actually at least half a dozen different ways of thinking about an arbitrary dog: some feature images, some don’t.)

Second, the theory centres on a model of space, but I’m not sure how much having a model does for us. Putting it radically, if you can work out what you’re going to do with your model, why not do that to reality instead and throw the model away? Now of course, models are useful for working on counterfactuals; predicting outcomes, testing contingencies and plans, and so on, but here we’re mainly talking about perception. If we stick to pure perception, doesn’t it seem that the model merely introduces another stage where errors might creep in, a stage which apparently insulates us from reality to the point that our perceptions are actually in some sense illusions?  Trehub might respond that the model has evolved for its value in dealing with predictions and planning, but also provides current perception; maybe, but there’s another problem there. The idea that my current perception of the world and my plans about next week are essentially running on the same system is not intuitively appealing – they feel very different.

The inevitable fear with a model-based system is that the real work is being deferred to an implied homunculus; that the model is, in effect, there for a ‘man in our head’ to look at. Trehub of course denies this, but the suspicion is reinforced in this case by the way the model preserves the topology of the retinal image: isn’t it a little odd that, as it were, the process of perceiving a shape should itself have the same shape?

Trehub has a robust response available, however; the evidence shows clearly that the brain does in fact produce models of perceived objects, filling in the missing bits, resolving ambiguities and making inferences. Many optical illusions arise from the fact that the pre-conscious parts of our visual system don’t tell us which bits of the world they’ve hypothesised, but present the whole thing as truly and unambiguously out there. Perception is not, after all, just a simple input procedure but a complex combination of top-down and bottom-up processes in which models can have an essential role. And while I don’t think the retinoid structure specifically has been confirmed by research, so far as my limited grasp of the neurology goes it seems to be fully consistent with the way things seem to work. Although it may still seem surprising, it is undeniably the case, for example, that the visual pathways of the brain preserve retinal topology to a remarkable degree. So as a theory of perception at least, the retinoid system is not easily dismissed. As a model of general consciousness, I’m not so sure. Containing a model of x is not, after all, the same thing as being aware of x. We need more.

What about the self, then? It’s natural that given Trehub’s spatial perspective he should focus on defining the location of the self, but that only seems to be a small, almost incidental part of our sense of self.  Personally, I’m inclined to put the thoughts first, and then identify myself as their origin; I identify myself not by location but by a kind of backward extrapolation to the abstract-seeming origin of my mental activity. This has nothing to do with physical space.  Of course Trehub’s system has more to it than mere location, in the special tokens used to signify belonging to me and truth. But this part of the theory seems especially problematic.  Why should simply flagging a spatial position and some propositions as mine endow a set of neurons with a sense of selfhood, any more than flagging them as Fred’s?  I can easily imagine that location and the same set of propositions being someone else’s, or no-one’s. I think Trehub means that linking up the tokens in this way causes me to view that location as mine and those propositions as my beliefs, but notice that in saying that I’m smuggling in a self who has views about things and a capacity for ownership;  I’ve inadvertently and unconsciously brought in that wretched homunculus after all.  For that matter, why would flagging a proposition as a belief turn it into one? I can flag up propositions in various ways on a piece of paper without making them come to intentional life. To believe something you have to mean it, and unfortunately no-one really knows what ‘meaning it’ means – that’s one of the things to be explained by a full-blown theory of consciousness.

Moreover, the system of tokens and beliefs encoded in explicit propositions seems fatally vulnerable to the wider form of the frame problem. We actually have an infinite number of background beliefs (Julius Caesar never wore a top hat) which we’ve never stated explicitly but which we draw on readily, instantly, without having to do any thinking, when they become relevant (This play is supposed to be in authentic costume!): but even if we had a finite set of propositions to deal with, the task of updating them and drawing inferences from them rapidly becomes impossible through a kind of combinatorial explosion. (If this is unfamiliar stuff, I recommend Dennett’s seminal ‘Cognitive Wheels’ paper.) It just doesn’t seem likely nowadays that logical processing of explicit propositions is really what underlies mental activity.
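
The scale of that worry is easy to make concrete. The sketch below simply counts how many pairwise and three-way interactions a store of explicit beliefs would offer for checking; the particular numbers are illustrative, but the growth is the point.

```python
# Illustrating the combinatorial worry: even just enumerating which pairs
# and triples of explicit beliefs might interact grows explosively with
# the size of the belief store.

from math import comb

for n in (10, 100, 1000, 10000):
    pairs = comb(n, 2)    # possible pairwise interactions
    triples = comb(n, 3)  # possible three-way interactions
    print(f"{n:>6} beliefs: {pairs:>12} pairs, {triples:>18} triples to consider")
```

And this counts only the candidate interactions, before any actual inference or updating is done on them.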

Some important reservations then, but it’s important not to criticise Trehub’s approach for failing to be a panacea or to provide all the answers on consciousness – that’s not really what we’re being offered. If we take consciousness to mean awareness, the retinoid system offers some elegant and plausible mechanisms. It might yet be that the theatre deserves another visit.

83 Comments

  1. Eric Thomson says:

    The Cartesian Theater is much maligned, never refuted. Kudos to Arnold for not flinching.

    My personal view of consciousness has been much influenced by Arnold’s view, as his book suggested to me that one of the deep puzzles about consciousness, its ‘perspectivalness’, could be accounted for via a kind of indexical (here/now) representational system. It turns out that some philosophers (I believe Perry) have suggested the same hypothesis, in a more anemic philosophical context, not a fully worked-out model.

    Excellent post, incidentally.

    I think you followed things through in an interesting way. I believe you have found the right response to your concern about representations: even if it is evolutionarily strange, the evidence is strong that this is what our brains do, so it really isn’t an objection to his theory as much as an interesting question about how our brains evolved (you could similarly ask why we need extra visual cortices getting in the way between transducers and behavior – sure, it introduces new ways to be wrong, but it also adds a lot of processing power, which seems to outweigh any disadvantages that might accrue).

  2. David says:

    “whether we consider these propositions true or false is determined by a linkage with a special token which indicates the truth value, while a similar linkage to I! determines whether these beliefs belong to us”

    This sounds completely implausible to me. I think it’s unlikely your brain represents beliefs as propositions, for one, but even if it did, the idea that the brain would represent beliefs independent of truth or falsity and then code whether the beliefs are T/F is silly. Why not just code “false” beliefs as the true belief that “x is false” and then you don’t need to waste space on a T/F token. Similarly, it seems silly that you determine whether you believe something by connecting it to your self-image token. I think your brain can safely assume that it believes any information it finds in itself.

    This theory proposes that essentially every single piece of information is (a) stored as a sentence and (b) connected to two different tokens, one designating that it’s true and another designating that it’s true/false for you.

  3. Arnold Trehub says:

    Peter: “In Trehub’s view our beliefs are encoded by tokens in a semantic network as explicit propositions: whether we consider these propositions true or false is determined by a linkage with a special token which indicates the truth value, while a similar linkage to I! determines whether these beliefs belong to us.”

    Just a brief comment on this point. My model of the cognitive brain does not propose a neuronal token that codes for the *truth value* of any neuronal proposition. In my view a truth value can only exist for propositions in abstract formal systems. Truth or falsity are automatically determined by the logical rules of the formal system, which itself is an artifact of the cognitive brain. *Belief* (I! attached to a proposition) is another matter and can be based on many different factors — “figuring things out”, reading a trusted book, accepting the opinion of a trusted mentor, etc. I’m sure the issue will come up again in this discussion.

  4. Peter says:

    Arnold – apologies, I seem to have suggested that belief and truth had separate indicators in your system, which would raise the curious possibility of a proposition I thought true but did not believe…

    But don’t your tokens do truth values? I’m looking at the bit where you say:

    …I have proposed that only neuronal propositions that are synaptically linked to the I-token and therefore accompanied by the parallel discharge of !I are true expressions of the brain states of belief (Trehub,1991). For example:
    [I can read English]I!  [This is true]I!  [This is false]
    [I can read Sanskrit]  [This is true]  [This is false]I!

    If that’s not indicating truth values I don’t think I understand what is going on.

  5. Arnold Trehub says:

    Peter:

    [I can read English]I! indicates that I *believe* the statement that [I can read English]I! (attachment to I!).

    [This is true]I! indicates that I *believe* the previous statement that [I can read English]I! is true (attachment to I!).

    [This is false] indicates that I do *not believe* the previous statement that [I can read English]I! is false (no attachment to I!).

    [I can read Sanskrit] indicates that I do *not believe* that [I can read Sanskrit] is true (no attachment to I!).

    [This is true] indicates that I do *not believe* that the previous statement that [I can read Sanskrit] is true (no attachment to I!).

    [This is false]I! indicates that I *believe* that the previous statement that [I can read Sanskrit] is false (attachment to I!).

    These are examples from p. 327 in *Space, self, and the theater of consciousness*, here:
    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

    What is indicated is *belief* about the truthfulness of the statements. The formal fact of the matter, the objective truth value of each statement, is not evaluated — just the *belief* about each statement. I hope this doesn’t add to confusion about the biological marking of belief. If it does, I’m game for another attempt at clarification.

  6. Peter says:

    Arnold,

    Thanks.

    You’re drawing a distinction between belief and truth values within a formal system. But it seems to me that your arrangements for belief amount to a formal system. You have in effect a list of explicit propositions (the fact that they’re encoded in neurons makes no difference) and a binary marker of belief/no belief. That’s a formal system in itself. I suspect you also have some rules about consistency in your back pocket, which unavoidably are going to resemble propositional calculus, at least.

    Your belief indicators still look like truth values to me, but let’s not make heavy weather of that. (I don’t know how ‘objective truth values’ work, btw, but my advice is to patent them quickly!)

  7. Eric Thomson says:

    This stuff about the proposition-system seems tangential to Trehub’s core story about consciousness. We could do away with it, and still be left with the core retinoid system/consciousness story.

    I am curious about the neologisms, i.e., ‘retinoid’ and ‘I!’. The latter especially has to be one of the most exasperating locutions I have ever seen, and must give an editor fits. Arnold, did your editor try to change your mind about using that and other neologisms?

  8. Peter says:

    This stuff about the proposition-system seems tangential to Trehub’s core story about consciousness.

    Yes, fair point.

  9. Vicente says:

    Eric, I don’t know if it is really tangential. Once you have your image coded and stored somewhere in the brain (in the retinoid system), it is time to interpret and understand what is in it; otherwise it is useless. In this case, and taking into consideration conditions like “blindsight”, I! believe that you should consider the need for some rules, maybe as a propositional system, that allow you to do so. This interpretation stage (even though my reference to blindsight gives you a lot of advantage) is much more related to consciousness/awareness than the mere effect of the retinoid system acting as a buffer, thanks to its autaptic components.

    Just a question (I am curious too, and ignorant): why is ‘retinoid’ a neologism? If ‘retinal’ means of the retina, then ‘retinoid’ means something that has to do with, is related to, or looks like the retina – that’s what the suffix ‘-oid’ is for, isn’t it?

    These artificial-vision-flavoured approaches eventually have to face the frame problem…

  10. Eric Thomson says:

    Vicente: it will be helpful for Arnold to answer your first question. I am doubtful (though not sure) it is necessary to have a propositionally structured representation for consciousness to exist, so I’d be happy to cut that part of it off.

    In terms of the neologism question, sure the roots are clear, but that doesn’t mean it isn’t a neologism! Ever read David Foster Wallace :) Obviously this was just a non-substantive question I had anyway.

  11. Arnold Trehub says:

    Eric and Vicente, look at p. 328, Fig. 8 in *Space, self, and the theater of consciousness*. My claim is that autaptic-cell activity within the egocentric space of the 3D retinoid above the dotted line constitutes consciousness — our phenomenal world. The autaptic neurons in retinoid space have a relatively short-term memory, so there is no long-term (lt) storage of images in the retinoid structures as such. Learning, coding, and memory (lt storage) of images happens in synaptic matrices within the brain processes shown in the boxes below the dotted line. Everything that happens in these mechanisms is at a pre-conscious or non-conscious level. Image patterns in pre-conscious mechanisms can become part of the content of the phenomenal world if they are projected into retinoid space. So Eric is right; it is not necessary to have a propositionally structured representation for consciousness to exist. But I think it is necessary to have a propositionally structured representation for semantic processing and analysis. See TCB, Chapters 3 and 6, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter3.pdf

    http://people.umass.edu/trehub/thecognitivebrain/chapter6.pdf

  12. Arnold Trehub says:

    Peter: “I don’t know how “sounds sort of like it’s coming from over there” is represented in a spatial retinoid model. If we’re going on to deal with the whole world of mental phenomena, we’re clearly going to run into more problems over location – where are my meditations, my emotions, my intimations of immortality, my poignant apostrophisation of the snows of yesteryear?”

    I wonder if the question here is more about the *precision* of phenomenal location in retinoid space rather than the claim that any conscious experience is a representation of *something somewhere* in perspectival relation to us in our phenomenal world, i.e., retinoid space. For example, if I hear distant thunder while I am facing west, I might experience the sound as coming from some distance in front of me. If I am facing north, the same sound of thunder is located off to my left. If I am in the middle of the storm, each crack of thunder seems to be all around me. The point is that there is always an egocentric spatial location of the experience, either tightly focused or diffuse.

    What about meditations, feelings, poignant recollections of “the snows of yesteryear”? In my conscious experience, such thoughts, feelings, and images are always within me — somewhere in my head as inner speech and images, and vaguely located in my body as feelings. Since the envelope of my body is represented within retinoid space, the location of such ephemeral intra-personal phenomena must also be contained within this intimate space. If such personal phenomena are not experienced as being located within us, to whom do they belong?

  13. Vicente says:

    Arnold:

    To what extent is the retinoid model compatible with other models of the neurological foundations of consciousness?

    For example, the Global Workspace idea, in which information has to be broadcast to several areas synchronously, is gaining acceptance, at least as a precondition for the activation of conscious brain states (fMRI and EEG recordings seem to match this model quite well). The extension of the retinoid structure to this broadcasting of information to several areas doesn’t seem easy to do. Could it be possible to consider a wide(whole) area retinoid network, build with several distributed subnetworks, or rather a central retinoid hub (multilayered) receiving information of different nature, multi-sensorial, emotional, memories, etc to build a whole egocentric space.

  14. Arnold Trehub says:

    Vicente: “Could it be possible to consider a wide(whole) area retinoid network, build with several distributed subnetworks, or rather a central retinoid hub (multilayered) receiving information of different nature, multi-sensorial, emotional, memories, etc to build a whole egocentric space.”

    Yes. Retinoid space *is* a central hub and a global workspace. This is just what is shown in Fig. 8 in my paper *Space, self, and the theater of consciousness*. Notice, however, that a global workspace, as such, does not constitute consciousness. For example, a Google server center is a vast global workspace but we would not consider it to be conscious. What is lacking in the Google workspace (or Baars’s global workspace) is *subjectivity* — a locus of perspectival origin within a volumetric surround, namely a core self (I!). In conscious creatures like us, our phenomenal world as it is constituted by autaptic-cell activity in our egocentric retinoid space *is* our global workspace.

    Put it this way:
    Phenomenal world/consciousness = global activity in retinoid space = global workspace with subjectivity.

    Here’s what I wrote in the *Edge* in response to a research presentation by Dehaene:

    “Stan Dehaene has done excellent work in exploring the neuronal correlates of the brain’s global workspace. But we have to recognize that what he and his colleagues are measuring are the brain changes in response to a novel perception of a previously masked object by a person who is already conscious. I agree with Steve Pinker that a global workspace is a key function of consciousness, but it is not an explanation of consciousness. In order to understand consciousness we have to explain how the brain is able to represent a volumetric world filled with objects and events from our own privileged egocentric perspective — the problem of subjectivity. This challenge is compounded by the fact that we have no sensory apparatus for detecting the 3D space in which we live.”

  15. John says:

    “When I examine my own mind I find many of the most salient thoughts and feelings have no location; it’s not that their location is vague or unknown, space just doesn’t come into it.”

    Please describe these abstract ideas, Peter. If there is inner speech it is between your ears, and if there is love there may be a feeling in the pit of your stomach, etc. Please give a clear instance of a thought or feeling that has no location.

    I have found that people who declare that they have non-located ideas (i.e. ideas that occupy no space or time) have a conception of mind that is not their current experience. Are you using “mind” in this way?

  16. John says:

    Peter “…If we’re going on to deal with the whole world of mental phenomena, we’re clearly going to run into more problems over location – where are my meditations, my emotions, my intimations of immortality, my poignant apostrophisation of the snows of yesteryear?”

    You seem to be jumping the gun: start with the meditations that you mention and try doing them. See Spatial modes of experience, for instance. Whilst meditating, listen to the inner speech of your “intimations of immortality” and notice the bodily changes and imagination that accompany the glorying in the “snows of yesteryear”. Berkeley seemed to go almost apoplectic when philosophers of his time used the “abstract idea” argument. There are no abstract ideas: I can describe real ideas, but you cannot describe abstract ideas. Sure, you can throw a word at me such as “immortality”, but only those associations of the word that cause events in my current experience are in my current experience. Events that cannot occur in our experience are not mental events.

    Peter: “I’ve inadvertently and unconsciously brought in that wretched homunculus after all.”

    There are observations and there are theories that describe these observations. If you discover a homunculus, the theory that you used to introduce it is false. You cannot then try to avoid the homunculus by dismissing the observations because a favourite theory would otherwise need to be abandoned; to do such a thing would be positively Dennettian. If you were Dennettian you would end up creating a theory of mind that consists of thoughts that occupy no time or space and a theory of perception that was pointedly dualist – see Dennettian Dualism. Why not take a scientific approach and just try to describe current experience rather than putting theory first? If you just describe your experience there are no homunculi, and Arnold’s retinoid system is a distinct possibility.

    Peter, I seem to be charging you with declaring that you have ideas (i.e. experiences) that are not in your experience, which is clearly impossible, and with ignoring observation to preserve theory. These are harsh charges, but you seem to have used these devices in your article.

  17. Arnold Trehub says:

    Peter: “Moreover, the system of tokens and beliefs encoded in explicit propositions seems fatally vulnerable to the wider form of the frame problem. We actually have an infinite number of background beliefs (Julius Caesar never wore a top hat) which we’ve never stated explicitly but which we draw on readily, instantly, without having to do any thinking, when they become relevant (This play is supposed to be in authentic costume!): but even if we had a finite set of propositions to deal with the task of updating them and drawing inferences from them rapidly becomes impossible through a kind of combinatorial explosion.”

    A combinatorial explosion is avoided because all sentential propositions need not be encoded and stored as explicit propositions (word strings) in memory in order to *produce* expressions of belief. The semantic networks that I propose make logical inferences on the basis of a limited store of propositions and are able to produce, “on the fly”, proper responses to sentential queries. See, for example, *The Cognitive Brain*, Ch. 6, pp. 108-112, *Defining a Subject* and *Inferring a Subject*. In other words, our vast number of background beliefs are not actually stored in the brain in the form of a look-up table of propositions, but are automatically generated on demand by the structure and dynamics of the semantic network, so the combinatorial explosion is finessed.

  18. John says:

    The frame problem does not really exist for a processor that moves objects around a model. If all you are doing is moving objects around a model of the environment, the frame problem is a non sequitur. If you make a model of your garden, then move little model people around according to how the people in the garden move, there is no frame problem. If you do this electronically there is no frame problem. If you mount the model in a coordinate system that models the way we model the garden, you can also get awareness.

    Beliefs are possible models, we may have an infinite number but we only model them one at a time. There are no abstract ideas that transcend the dimensionality of mind.

  19. Arnold Trehub says:

    In an exchange several years ago in another forum, Eric Thomson asked if the retinoid model can explain the phi phenomenon. John, on his own blog, has recently presented examples and discussed implications of the phi phenomenon for our understanding of phenomenal experience. It might be relevant to show here my response to Eric re the phi phenomenon:

    “Interesting that you should mention the Phi Phenomenon. Yes, I have looked into how the retinoid model explains this phenomenon. If you recall, in the retinoid model, selective visual attention consists in the projection of added neuronal excitation in retinoid space by selective excursions of the heuristic self-locus (HSL). Here’s how patterns of self-locus activation explain phi:

    1. When the first dot flashes on (S1), HSL moves to the spatial locus of S1.

    2. When S1 turns off and the second dot (S2) flashes on after a blank interval of ~30 ms up to ~200 ms, HSL moves over intervening autaptic cells to the new spatial locus of S2.

    3. Over the trajectory of S1 to S2, neuronal HSL excitation plus excitation from the decaying S1 combine to create a moving trace of heightened autaptic cell activity.

    4. We see phi motion between successively flashed dots because there really is a path of moving neuronal excitation induced by the heuristic self-locus in the spatial interval between S1 and S2.”

  20. Richard J R Miles says:

    Arnold, re comment 14. I am puzzled by your last sentence, where you say ‘we have no sensory apparatus for detecting 3D space.’ Surely our 3D bodies are covered with touch sensors; these, together with our eyes (peripheral vision and focusing), hearing and smell, linked with our nature/nurture memory of experience, all hopefully operational when we are conscious, enable us to sense our 3/4D world. Plus, you have also described the Z-plane visual side of 3D sensing. So, what do you mean?

  21. Arnold Trehub says:

    Richard, what I mean is that *no* creatures, including humans, have sensory transducers that can *detect* the volumetric space they live in. Touch sensors, all of our visual sensory apparatus, audition, olfaction, and memory, can be in perfect working order, but if we had only these sensory systems and the memory of what they provide us, we would only have internal representations of events at the *proximal locus* of sensory stimulation — not a representation of the *surrounding egocentric space* in which the physical events that stimulate our sensors occur. The Z-planes of retinoid space do not *sense* the space we live in; they *constitute an innate phenomenal representation of the space we live in*. And it is into this phenomenal space of the retinoid system that our separate pre-conscious sensory signals are projected in spatio-temporal register as conscious perceptions. I hope this answers your question.

  22. Richard J R Miles says:

    Arnold, thank you. Yes, it does answer my question; it is in fact the only answer I could envisage that you might give. There is, however, the fact that we do have knowledge and memory of movement, of ourselves and other things, that allow us to relate to and make sense of our 3/4D world, which we learn after becoming self-aware at the age of 2 or 3 years through nature/nurture and knowledge of others past and present.

  23. Peter says:

    Arnold –

    Comment 12: it’s not about precision. To put it brutally, your system represents everything as a matter of location in space, but most of the contents of our mind have no location. It’s not that the location of the number 2 is unclear; location is irrelevant. I don’t believe you really think that 2 is situated in your brain, let alone that that’s the key thing about it.

    Comment 16: John, similarly, without getting bogged down in a wider discussion, would you say that the key thing when we think about the idea of immortality is to get its location pegged down?

    Comment 17: Arnold, I don’t think that’s any help. Suppose we have your system with its limited store of propositions; it encounters a bear, which generates the appropriate input. The system sets about generating some inferences (how does it do that, btw – is it using one of those formal abstract systems after all?). Let’s suppose it looks at the first proposition in its list, which might be ‘If x is a square, x has four sides’. So it infers that ‘if the bear is a square, the bear has four sides’. Not very useful, but we’re up and running, so now we might get ‘if the bear is a square, the bear has fewer than five sides’. And so on for six sides and steadily upwards. If the system proceeds in an orderly way it will carry on thinking about the potential implications of the bear being a square indefinitely, and so far we’ve barely scratched the surface of the possible implications of one aspect of the first proposition. The bear won’t wait for us to finish. The problem is that nobody knows how to generate only relevant inferences.
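    [Ed.: Peter's worry can be made concrete with a toy generator. The function name and phrasing are invented for illustration; the point is that a blind forward-chainer can spin out an unbounded stream of perfectly valid, perfectly useless inferences from a single conditional, and nothing in the machinery itself marks them as irrelevant.]

```python
from itertools import islice

def square_inferences(subject):
    """Endless valid-but-useless consequences of 'if subject is a square...'."""
    yield f"if the {subject} is a square, the {subject} has four sides"
    n = 5
    while True:   # never terminates: the bear won't wait
        yield f"if the {subject} is a square, the {subject} has fewer than {n} sides"
        n += 1

# An orderly system working down this list never gets past squares:
first = list(islice(square_inferences("bear"), 3))
assert first[0] == "if the bear is a square, the bear has four sides"
assert first[2] == "if the bear is a square, the bear has fewer than 6 sides"
```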

    Comment 18: That is perfectly true, but not at all comforting – surely Arnold wants the system to be capable of dealing with the real world, not just with its own model?

  24. Arnold Trehub says:

    Peter, thanks for your comments in #23. It’s clear that clarification as well as rebuttal is needed.

    1. You wrote: “To put it brutally, your system represents everything as a matter of location in space, but most of the contents of our mind have no location.”

    Whereas my theoretical model of the cognitive brain proposes that everything in our phenomenal world (global conscious content) has a spatio-temporal location in retinoid space, it certainly does *not* propose that “everything is a matter of location in space”. Cognitive processing obviously involves much more than the spatial location of objects and events. But, crucially, my claim is that we would be unable to do the kind of thinking that we do if the objects of our thoughts were not located somewhere in the world of our phenomenal experience. In other words, a phenomenal world in which everything has an egocentric location (retinoid space) is a *prerequisite* for the other contents of our mind. In fact, most of our thinking is done in the unconscious mechanisms of the cognitive brain; it is only after the output of these mechanisms is projected into retinoid space that we consciously experience them as images, feelings, and inner speech. But when we do experience them, we always experience them as being somewhere in the world in relation to our personal locus of phenomenal origin (I!). This is subjectivity/consciousness.

    2. “It’s not that the location of the number 2 is unclear; location is irrelevant. I don’t believe you really think that 2 is situated in your brain, let alone that that’s the key thing about it.”

    I agree that the location of the number 2 is not the key thing about it. But you could not consciously think about the number 2 if it did not have a location in your phenomenal world. Do I really believe that 2 is now situated in my brain? I certainly do! Both as an image and as a propositional concept in a human invention we call “mathematics”. I also believe that 2 is situated in your brain, rather similar to my brain representations of 2. If this were not the case, how would you be able to add 2+2 and arrive at 4? The fact that 2 is situated in your brain is key only to the extent that if it were not so situated you would not be able to use 2 in any of your mental calculations.

    3. “Suppose we have your system with its limited store of propositions; it encounters a bear, which generates the appropriate input. The system sets about generating some inferences (how does it do that, btw – is it using one of those formal abstract systems after all?). Let’s suppose it looks at the first proposition in its list, which might be ‘If x is a square, x has four sides’. So it infers that ‘if the bear is a square, the bear has four sides’….If the system proceeds in an orderly way it will carry on thinking about the potential implications of the bear being a square indefinitely, and so far we’ve barely scratched the surface of the possible implications of one aspect of the first proposition…..The problem is that nobody knows how to generate only relevant inferences.”

    First off, the system does *not* “look at the first proposition on its list”. If the person encounters a bear, the brain automatically fires up all of the propositions *it has learned* that are relevant to bears in parallel (we are talking about brains, not digital computers). Relevant inferences? Which inferences will control subsequent behavior will depend on the person’s motives and the affordances of the immediate situation. For example, if the bear is encountered behind bars in a zoo, the person might stay and admire the size of the animal. If the bear is a grizzly encountered in the wilds of a National Park, the person might try to run away, or if armed, shoot it. For more about this, see *The Cognitive Brain*, Ch. 9, “Set Point and Motive: The Formation and Resolution of Goals”, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter9.pdf

    The relationship between the brain’s internal model of the world and the real world? This is an interesting question that we can discuss later.

  25. Eric Thomson says:

    Peter are you suggesting that you have had experiences that are nonspatial? I have never had one, to my knowledge.

  26. Vicente says:

    Peter:
    would you say that the key thing when we think about the idea of immortality is to get its location pegged down?

    Definitely not, but I think that we might be confusing two issues: one is the location of ideas (phenomenally), the hard problem, and the other is the Kantian proposition that our minds cannot manage ideas that are not underpinned by space and time as fundamental categories.

    Is it possible to bear in mind the abstract concept of immorality “per se”? Or do we need to decompose it into a large set of theoretical real cases of immorality that support the concept, so that we talk about an immoral behaviour or an immoral act (always according to agreed rules of judgement)? “Immorality” would then be the abstract summary term that holds the infinite number of possible immoral cases, all of which happen in space, and we consider them in a spatial scenario. I really don’t know. I think that as far as language is concerned we just use the “word”, but for reasoning we have to appeal to concrete cases (in space). If you think of the idea of “huge quantity” per se, you’ll find that it is meaningless until you apply it to a specific case; in that moment you’ll see how your mind makes concrete objects appear in space.

    Eric:

    some experiences, some emotions, can be spaceless; sometimes an instant of deep bliss has no time and no space… but in general, for reasoning and life navigation, I think Kant is right.

  27. Peter says:

    Arnold:

    my claim is that we would be unable to do the kind of thinking that we do if the objects of our thoughts were not located somewhere in the world of our phenomenal experience

    Surely the retinoid system records location in physical space, not in ‘the world of our phenomenal experience’?

    when we do experience them, we always experience them as being somewhere in the world in relation to our personal locus

    That’s what I emphatically deny. It seems to me that someone whose objects of consciousness were restricted to those with a spatial location would be suffering a catastrophic impairment.

    you could not consciously think about the number 2 if it did not have a location in your phenomenal world

    Actually, that’s the only way I can think about it. You think it’s in your head? Is it in mine too? Do we have different 2s? Do pairs of things in the world not have any real objective property of duality?

    If this were not the case, how would you be able to add 2+2 and arrive at 4?

    How do you do it? I ask what two plus two is and the answer from the retinoid comes back: it’s located in your brain. That’s apparently all it can tell me.

    First off, the system does *not* “look at the first proposition on its list”.

    Then how does it address them?

    the brain automatically fires up all of the propositions *it has learned* that are relevant to bears in parallel

    Automatically? How does it know which ones are relevant? Every inference in the infinite sequence I described specifically mentions bears. We can see that they’re all useless, but how does your system tell?

    For example, if the bear is encountered behind bars in a zoo, the person might stay and admire the size of the animal.

    It’s not enough to give an example of relevant behaviour (I know what it’s like), you have to show how your system generates it.

  28. Peter says:

    Eric:

    As a tu quoque, are you suggesting you’ve never thought about anything that doesn’t have a spatial location?

  29. Peter says:

    Vicente:

    I don’t want to get bogged down in what is a much wider debate: but I don’t myself believe our minds cannot manage ideas that are not underpinned by space and time as fundamental categories. It might arguably be so in our perception of the spatial world, but I don’t think, for example, that mathematical concepts are reducible to the conjunction of all the physical cases that instantiate them. 2+2=4 is not an empirical generalisation.

  30. Eric Thomson says:

    Peter, every experience I can remember has some spatial content. I’m not saying this fixes the meaning, just making a phenomenological claim. I am *not* saying we should identify the referent of ‘1+1=2′ with my phenomenology when I consciously entertain that equation, but just that said phenomenology has spatiality to it.

    Perhaps there are clear exceptions to what I’m saying. But even thinking about nonspatial phenomenology seems to be going on around my head.

    The problem is taking this as the key, or sole, determinant of meaning. This led the introspectionists down a horrible alley (or so we are told).

    At any rate, I think Arnold could just be agnostic about phenomenology of highly abstract contents. For basic consciousness, like you find in your dog, this Kantian claim seems roughly right. But I would be happy and willing to be wrong here! I’m not all that wedded to this, but just saying what things seem like to me subjectively.

  31. Peter says:

    If you mean every sensory experience you remember has a spatial element, I can live with that (albeit still a little worried about the smells and the noises). For consciousness-as-awareness the retinoid theory might work well. But general consciousness goes well beyond things situated in physical space. A few minutes ago I was sitting here wondering whether a renewed dose of inflation might not be good for the economy in current circumstances and wishing I understood economics: I really can’t see that awareness of any spatial locations was involved.

    I suppose if you were going to be dogmatic about it, you might say something like: well, inflation is about raised prices, and some of the things the prices apply to do have physical locations (although we don’t know what they are and they make no difference). But that seems a pretty desperate resort. Or you could fall back on saying that inflation and the rest is really in your head. To be honest I’m not sure how much better that is.

  32. Peter says:

    Incidentally, an interesting counterpoint to this discussion might be provided by Colin McGinn’s contention that the non-spatial character of consciousness is what makes it mysterious. I briefly discussed this back in 2006.

  33. Arnold Trehub says:

    Peter: “Surely the retinoid system records location in physical space, not in ‘the world of our phenomenal experience’?”

    The retinoid system does indeed represent location in the world of our phenomenal experience. The fact that we can successfully pick up objects and navigate in the surrounding space of the physical world attests to the fact that evolution has done a pretty good job of making our phenomenal world correspond in essential ways to features of the physical world. But the existence of illusions also attests to the fact that correspondence between our conscious/phenomenal world and the real physical world is not always tight. A good example of this that is commonly experienced is the moon illusion.

    Peter: “It seems to me that someone whose objects of consciousness were restricted to those with a spatial location would be suffering a catastrophic impairment.”

    There might be some confusion here. I am explicitly referring to the location of the *conscious experience* of objects and events, not to the spatial location of objects in the physical world. For example, a griffin does not really exist in the physical world (has no location), but I can imagine a griffin and think about its features only if I imagine it and its features located somewhere in my phenomenal world. If its features were *nowhere* in my phenomenal world, it could not be an object of my conscious thought. As you read these words, they are in front of you in your egocentric phenomenal space. If you later consciously think about them, they will be experienced as inner speech and possibly as images somewhere in your head. Most of the cognitive processing, however, will be done in the pre-conscious and non-conscious mechanisms of your cognitive brain. Recurrent interaction between the synaptic matrices of these subconscious neuronal mechanisms and the active content of your retinoid space provides the stream of the extended present in your phenomenal world.

    Peter: “You think it’s [a 2] in your head? Is it in mine too? Do we have different 2s? Do pairs of things in the world not have any real objective property of duality?”

    Yes, I think a neuronal representation of 2 is in my head and also in your head. There might be minor individual differences in our two representations, but they share essential properties. If this were not the case, neither of us would be able to do simple calculations in our head.
    I guess that experience teaches us that pairs of things in the world have the objective property of duality.

    Arnold: “First off, the system does *not* “look at the first proposition on its list”.

    Peter: “Then how does it address them?”

    Arnold: “the brain automatically fires up all of the propositions *it has learned* that are relevant to bears in parallel”

    Peter: “Automatically? How does it know which ones are relevant?”

    I explain how relevance is determined in my book *The Cognitive Brain*, Ch. 9, “Set Point and Motive: The Formation and Resolution of Goals”, here:
    http://people.umass.edu/trehub/thecognitivebrain/chapter9.pdf

    Peter: “It’s not enough to give an example of relevant behaviour (I know what it’s like), you have to show how your system generates it.”

    I agree. That’s why I wrote about motivation in TCB, Ch. 9.

  34. Arnold Trehub says:

    Peter: “Incidentally, an interesting counterpoint to this discussion might be provided by Colin McGinn’s contention that the non-spatial character of consciousness is what makes it mysterious.”

    My SMTT experiment provides strong empirical evidence that McGinn’s contention that consciousness is non-spatial is simply wrong.

  35. Peter says:

    Arnold,

    I don’t like monopolising the comments, but your patience deserves a further answer.

    I think you’ve got a problem if the retinoid represents location in phenomenal space, because doesn’t it constitute phenomenal space? That would mean it represents itself. Surely the model you’re talking about represents the physical world? If it doesn’t I really have missed the point comprehensively.

    I can quite well think about a griffin without assigning it any kind of location, whether real or phenomenal. I did so in writing that last sentence. What location did you attribute to the griffin when you read that? (NB – It’s too late to summon up a mental picture of the beast now.)

    I think a neuronal representation of 2 is in my head

    Yes, but I’m not talking about a neuronal representation of 2, I’m talking about the number 2 itself. Where is it?

    TCB, Ch.9 is interesting stuff, but I don’t see where it tells me how your system decides which of its stock of propositions to address, which inferences to draw, and which to neglect without examination.

    My SMTT experiment provides strong empirical evidence that McGinn’s contention that consciousness is non-spatial is simply wrong

    It provides strong empirical evidence (I’d be content to say it proves) that the brain does indeed do spatial modelling, and that that modelling feeds into our conscious experience – but not that spatial modelling constitutes consciousness.

  36. Eric Thomson says:

    Peter perhaps you just have a very different phenomenology than I do. When I smell things, there is a spatial location to the smell (roughly around my nose), and when I hear things, I hear them as happening ‘out there’, not as some unlocalized tone.

    That we are talking past each other is clear:
    “Yes, but I’m not talking about a neuronal representation of 2, I’m talking about the number 2 itself. Where is it?”

    This shows the confusion. I am talking about the *phenomenology of thinking about the number 2*, not the number 2 itself. I tried to be clear above about this crucial distinction, but I failed. When I have a phenomenology associated with my mathematical thinking, there is a spatial component.

    I’m not saying the number 2, the abstract object, is in space. These are different things. Phenomenology of seeing a dog is not a dog. Phenomenology of thinking about ‘2’ is spatial, even though ‘2’ is not in space. This is a crucial distinction.

    If anyone can guide me, phenomenologically, to an *experience* that has no spatial component (not the *object of experience* such as the number 2, but the experience of thinking about that object), then I will be grateful.

    It’s as foreign to my phenomenology as having an experience that has no temporal aspects. The number 2 has no temporal component, but phenomenologically my thoughts about 2 have a temporal aspect, just as they have a spatial aspect.

  37. Eric Thomson says:

    Most discussions of whether consciousness occurs “in space” need to distinguish contents from vehicles (I discussed this here).

    In the manuscript I’m writing up, I have some great quotes from the 19th century making fun of materialism by saying that if it is true, then your thoughts must have a top, bottom, and west side. :) Pretty funny.

  38. Vic P says:

    Chorus:
    We built this city, we built this city on retinoidal space
    Built this city, we built this city on retinoidal space

    Say you don’t know me or recognize my face (fusiform)
    Say you don’t care who goes to that kind of place
    Knee deep in the hoopla sinking in your fight
    Too many runaways eating up the night

    Marconi plays the mamba, listen to the radio, don’t you remember
    We built this city, we built this city on retinoidal space

    Chorus:
    We built this city, we built this city on retinoidal space
    Built this city, we built this city on retinoidal space

    Someone always playing corporation games
    Who cares they’re always changing corporation names
    We just want to dance here someone stole the stage
    They call us irresponsible write us off the page

    Marconi plays the mamba, listen to the radio, don’t you remember
    We built this city, we built this city on retinoidal space

    We built this city, we built this city on retinoidal space
    Built this city, we built this city on retinoidal space

    It’s just another Sunday, in a tired old street
    Police have got the choke hold, oh then we just lost the beat

    Who counts the money underneath the bar
    Who rides the wrecking ball in to our rock guitars
    Don’t tell us you need us, ‘cos we’re the ship of fools
    Looking for America, coming through your schools

    (I’m looking out over that Golden Gate bridge
    Out on another gorgeous sunny Saturday, not seein’ that bumper to bumper traffic)

    Don’t you remember (‘member)(‘member)

    (It’s your favorite radio station, in your favorite radio city
    The city by the bay, the city that rocks, the city that never sleeps)

    Marconi plays the mamba, listen to the radio, don’t you remember
    We built this city, we built this city on retinoidal space

    We built this city, we built this city on retinoidal space
    Built this city, we built this city on retinoidal space
    Built this city, we built this city on retinoidal space
    Built this city, we built this city on retinoidal space

    (We built, we built this city) built this city (We built, we built this city)

    “We Built This City” is the title of a song written by Bernie Taupin, Martin Page, Dennis Lambert, and Peter Wolf, and originally recorded by the American pop rock group Starship and released as its debut single on August 1, 1985.

    If consciousness is inherently ungrounded like a glass of water, then retinoidal space is the vortex which appears in the water. The being needs retinoidal space in order to make constant judgements about its environment. When the being reaches safe ground, it ungrounds this space or enters into the good.

  39. Vicente says:

    Eric: “Phenomenology of seeing a dog is not a dog”

    To me that is precisely a dog. If there is no observer for the “….” that assigns to it the concept of dog in its mind, there is no dog, let alone a dog “class”.

    Then, what are the real objects irrespective of our mind? Ah, good question.

    Peter: The retinoid system could contain itself, like a computer can run a simulation of itself as part of a larger-scope simulation. The retinoid system could be recursive; I mean as a representational architecture and system for the outer world.

    Regarding abstract concepts, say “goodness”: can you manage it without eventually having to make use of concrete instances that always involve space? I can’t, and I feel curious; could you please provide a short and simple example of the use of “2” or “goodness” or “democracy” without involving space.

  40. John says:

    Peter, you seem to be confusing conscious experience with all possible processing and forms anywhere in the universe. Take for instance an antineutrino: I can probably never have an electron antineutrino in my conscious experience, but I can perform the maths on neutron decay and surmise the properties of the measuring equipment needed to test these predictions. The fact that I might imagine a neutron as a little cloud and an electron antineutrino as a tiny little cloud does not stop me from deeply complex speculations. The maths is just a skill whose intermediate results “pop” into mind. I know that a neutron is nothing like a cloud really; the cloud is just a placeholder, a form in my experience that substitutes for reality. Like all of my experience.

    Take immortality. I cannot contain a millennium of time, my experience is limited to the time that I have in my extended present but, as for the antineutrino, I can make a representation of the possibility of immortality by, for instance, imagining a line that has 2011, 3011, 4011 etc marked at equal intervals with a band marked in red on the line that is my currently expected lifespan. I can then compare the red band with the line and get an intimation that immortality is much larger than the band.

    There are no abstract ideas. An “idea” in the philosophy of mind is any event such as an image, inner speech, feeling, emotion etc. that occurs in experience. An abstract idea would be such an event that is outside of experience. There are no experienced events that are outside of experience. Ideas can be used to predict events that are outside of experience (ideas such as the cloud or the time-line) but the form of ideas is always firmly within experience. This was all argued to death in the eighteenth century – look up Bishop Berkeley.

  41. John says:

    ps: Peter, you said above that “…Arnold wants the system to be capable of dealing with the real world, not just with its own model”. The model contains sufficient degrees of freedom to model the real world, even if, at times, we need to use clouds (areas) and time-lines (linear forms), cut down 3D representations of hyperspace etc. to do this modelling. It is a fortunate effect of the anthropic principle that our minds and the real world have much in common geometrically and geometry is the stuff of models.

  42. Peter says:

    Let’s spool back here a minute.

    Arnold proposes that patterns of retinal activity are preserved and processed by ‘retinoid’ structures which model the spatial relations of objects in the world in relation to us, the observer. That retinoid model constitutes the contents of our conscious thought.

    The objection I’ve made is that the retinoid system gives everything a location in space. However, many of the things we think about or are conscious of have no location. This includes numbers such as 2, and abstract entities such as economic inflation or cognitive science.

    I can see two possible reasonable rejoinders. One is that in fact the retinoid system only deals with sensory awareness, not with general consciousness, so it only ever deals with things that (more or less) have locations. The second is that I’ve misunderstood and in fact the system doesn’t always attribute location to things – perhaps there are non-spatial retinoids to deal with abstract concepts, for example. That would require some explanation, but it seems viable enough.

    A third possible rejoinder, however, is to say: why yes, everything we can possibly think about has a distinct physical location. If we’re thinking about 2+2=4 we have to know where those twos are. We can’t think about philosophy without deeming it to be, say, three feet to the north of us – or inside our personal crania. To me, this would seem a pretty hopeless line to take.

  43. Arnold Trehub says:

    Peter, you say: “The objection I’ve made is that the retinoid system gives everything a location in space. However, many of the things we think about or are conscious of have no location.”

    I think this is the crux of the misunderstanding. The retinoid system does not give *everything* a location in space. The retinoid system gives every *instance of conscious content* a location in retinoid space. If I read the word “griffin” on my computer screen, I have a conscious experience of the printed word located about ten inches out there on the screen in front of me. If I reflect on what a griffin is like, I have a conscious image of a beast with an eagle’s head and a lion’s hind quarters roughly located somewhere behind my eyes. The retinoid system does not give the griffin, as such, a location in space; it gives my *conscious image* of a griffin a location in space. Notice however, if griffins really did exist, and I were looking at a real griffin, my conscious experience of the griffin would be located somewhere out there in front of me in my phenomenal world, which I claim is the global pattern of autaptic-cell activity in my brain’s retinoid space. Does this help clarify things?

  44. Peter says:

    Arnold,

    I think we might be getting to the limits of clarification.

    The entities I’ve mentioned are not located in any kind of space: to speak of their spatial location is a category error. It doesn’t matter whether we’re speaking of real space, retinoid space, or phenomenal space (unless you’re going to tell me that these latter spaces are purely metaphorical, instead of models of real space – but I don’t think you are).

  45. Arnold Trehub says:

    Peter: “The entities I’ve mentioned are not located in any kind of space …”

    Do you mean that the entities you mention are located in some kind of Platonic World of ideal forms?

  46. Arnold Trehub says:

    Peter, would you claim that your conscious thoughts about economic inflation have no location in space?

  47. Michael Baggot says:

    I think that Peter is making a very important point here about a nonphenomenal or mnemic space that is represented most notably in humans. This space contains mental images which derive from memory that is inspired either by external phenomenal imagery or by the manipulation or examination of already existing mnemic images. Surely this is a separate, non-retinoid, propositional space that the brain uses solely for internal cognitive communication and recall. Oh where, oh where has my retinoid gone.

  48. John says:

    Peter: “The objection I’ve made is that the retinoid system gives everything a location in space. However, many of the things we think about or are conscious of have no location. This includes numbers such as 2, and abstract entities such as economic inflation or cognitive science.”

    Sure, you can think ABOUT undiscovered planets or planets that do not exist now, these have no definite location in the physical world but when you inspect your ideas, your current experience, everything does have a location within that experience. When thinking about undiscovered planets you imagine, perhaps, earth-like orbs or cosmic maps but these are events with locations in your experience.

    You keep making this statement that “many of the things we think ABOUT have no location”. Yes, obviously, but when we think about them the thoughts are clearly located. There are no abstract ideas. Several people have raised this objection above but you have not answered it. Please tell Arnold and me about these events in your experience that occupy no time and no space, these abstract ideas; please describe just one of them.

    Michael: “This space contains mental images which derive from memory”. Shut your eyes and allow the persistence of vision to subside. Now imagine your computer screen at its current location. Reach out and touch the screen in your imagination. You will notice that the imagined and perceptual screens almost overlap each other and share the same space – you can even reach out into the imaginary space to touch the imaginary screen! See The spatial modes of experience. There seems to be but one space of experience.

  49. Arnold Trehub says:

    Michael: “I think that Peter is making a very important point here about a nonphenomenal or mnemic space that is represented most notably in humans. This space contains mental images which derive from memory that is inspired by either external phenomenal imagery or by the manipulation or examination of already existing mnemic images. Surely this is a separate non-retinoid, propositional space that the brain uses solely for internal cognitive communication and recall.”

    I am in full agreement with you that our brain has such a non-phenomenal space. But surely, *by definition*, events in such a space are *not* our phenomenal experiences. According to my theoretical model of the cognitive brain, the images and propositions in this pre-conscious “mnemic” space are given by the patterns of neuronal activity on mosaic-cell arrays and class cells in synaptic matrices and semantic networks. It is only after the output of these non-conscious brain mechanisms is projected into the egocentric space of our phenomenal world — retinoid space — that these non-phenomenal excitatory patterns become part of our conscious experience. This is why conscious experience of our perceptions and decisions lags their completion in the non-conscious part of the brain.

  50. Vicente says:

    Arnold, according to comment #49 it is just the architecture (networking) and the physiological properties (autaptic cells) of the retinoid system that are responsible for enabling consciousness.

    Then my question is: if we could build a system (HW + SW) that replicates the retinoid system and necessary I/O functions, would it be somehow conscious IYO?

  51. Peter says:

    Arnold:

    Do you mean that the entities you mention are located in some kind of Platonic World of ideal forms?

    That would be one view of the matter, but I’m not taking any stance on what the correct answer is, just pointing out that it isn’t spatial location.

    Peter, would you claim that your conscious thoughts about economic inflation have no location in space?

    Yes, that’s right.

    John:

    There are no abstract ideas

    Of course there are. In fact the denial of the existence of abstract ideas is itself an abstract idea, so it’s sort of neatly self-refuting.

    Please tell Arnold and me about these events in your experience that occupy no time and no space, these abstract ideas; please describe just one of them.

    I’ve never mentioned an experience that occupies no time. As for entities that have no physical location, I’ve already mentioned several: the number 2, economic inflation, griffins, to name but three. Thoughts, stories, theories; none has a location. Do you think our thoughts consist of lists of physical objects?

    I feel I’ve said too much in this discussion already, so please excuse me from repeating myself any further.

  52. John says:

    Peter, I cannot understand why you refuse to separate current experience from possible information.

    The number 2 on this screen is a geometrical form in my current experience that may have associations that cause events in my current experience. It is not an abstract idea. The same goes for the mental image of a griffin: it is an imaginary entity, but imagination is real and located, so a griffin is not an abstract “idea”. Griffins are not found in the world but are found in the imagination. Certainly there are objects that are not in my current experience, but these are not present ideas.

    Thoughts, as inner speech, are clearly located somewhere between my ears in my current experience. They tend to just pop into my experience. Theories involve inner speech and imagined geometrical forms; stories are tales recounted by other people that originate at their lips in current experience and cause feelings and imaginings in current experience. None of your examples are events without locations.

    Do you think that thoughts do not exist? There are no events that have time but no spatial extent. It seems as if you are labouring under a theory of experience that does not fit the form of experience or even the form of the world at large.

    You ask: “Do you think our thoughts consist of lists of physical objects?”. I have not mentioned lists or physical objects; I have described thoughts, perceptions and imaginings as they are when they occur: forms in current experience (where “current” includes the extended present moment). Just look, feel and listen: that is them. Whether these events are physical objects is a different question.

  53. John says:

    Peter, I must confess to being hugely perplexed by your denial of space in experience but affirmation of time. More than one thing at an instant is space, by definition. You cannot have time without space because even the smallest of things is multiple. The only issue is whether the spacetime of experience is physical.

  54. Arnold Trehub says:

    Vicente: “Then my question is: if we could build a system (HW + SW) that replicates the retinoid system and necessary I/O functions, would it be somehow conscious IYO?”

    I don’t know. I suppose it depends on what you mean by “replicates”. The putative retinoid system that I describe is composed of a particular kind of network/architecture of living neurons with all of their internal biophysical properties and local electromagnetic fields. If a replication of this kind of network included these component properties I would guess that it would be conscious. But to my knowledge, there is no existing artifact that has a volumetric representation of 3D space with a fixed locus/coordinate of origin. Do you know of any artifact of this kind?
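    The kind of artifact Arnold asks about, a volumetric representation of 3D space with a fixed locus of origin built from autaptic (self-exciting) units, can at least be sketched as a data structure. The following toy is my own illustration with invented names, not Trehub's specification, and of course it shows only the representational bookkeeping, not anything that would bear on consciousness:

```python
import numpy as np

# Toy sketch (my own illustration, not Trehub's model): a volumetric array of
# "autaptic" units with a fixed egocentric origin. Each unit re-excites itself
# at reduced gain, so an activation pattern persists after input stops.

class ToyRetinoid:
    def __init__(self, size=9, decay=0.9):
        self.space = np.zeros((size, size, size))        # volumetric 3D representation
        self.origin = (size // 2, size // 2, size // 2)  # fixed self-locus ("I!")
        self.decay = decay                               # autaptic gain < 1.0 -> slow fading

    def excite(self, x, y, z, strength=1.0):
        # coordinates are egocentric: given relative to the fixed origin
        ox, oy, oz = self.origin
        self.space[ox + x, oy + y, oz + z] += strength

    def step(self):
        # autaptic retention: every cell re-stimulates itself each tick
        self.space *= self.decay

ret = ToyRetinoid()
ret.excite(2, 0, -1)     # register "something somewhere" relative to the self-locus
for _ in range(5):
    ret.step()
print(ret.space[ret.origin[0] + 2, ret.origin[1], ret.origin[2] - 1])  # ≈ 0.59
```

    The point of the fixed `origin` field is that every excitation is addressed relative to the same locus, which is what distinguishes this layout from an ordinary voxel buffer.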

  55. VicP says:

    Arnold,

    I enjoyed Monday’s Charlie Rose Brain Series program on Consciousness with Patricia Churchland and others:
    http://www.charlierose.com/view/interview/12025

    Particularly they touched on the unconscious or preconscious experience and how this selectively broadcasts into the global workspace. I would say that your retinoidal space structure would fit nicely into the discussion.

  56. Tom Clark says:

    Arnold:

    “But to my knowledge, there is no existing artifact that has a volumetric representation of 3D space with a fixed locus/coordinate of origin. Do you know of any artifact of this kind?”

    Perhaps you know of this artificial system that reportedly has an explicit internal self-model, described by Metzinger at http://www.humanamente.eu/PDF/Issue14_Paper_Metzinger.pdf starting on p. 35. Obviously not conscious, but perhaps a precursor to a conscious system according to your way of thinking.

  57. Arnold Trehub says:

    Tom, I know about the robot that Metzinger describes. The machine uses the output from actuation sensors to gradually learn about its own structure and motion. It develops a kind of internal model of its functional characteristics. But I don’t think it can be described as a *phenomenal* self model (PSM) since it lacks the global *subjective* perspective of being centered in a surrounding space/world.

  58. Arnold Trehub says:

    VicP, thanks for that very interesting link. Retinoid space is a global workspace with the additional crucial property of subjectivity. No subjectivity, no consciousness. I was surprised that the concept of subjectivity was not raised in the discussion.

  59. Tom Clark says:

    Arnold:

    “But I don’t think it can be described as a *phenomenal* self model (PSM) since it lacks the global *subjective* perspective of being centered in a surrounding space/world.”

    Yes, Metzinger also says in that paper that getting to the phenomenal self-model requires something more, and footnotes his book and other papers. This raises the question of what additional functional characteristics entail phenomenal states, and why. Metzinger, although he takes subjectivity to be a central aspect of consciousness, says that subjectivity isn’t necessary for having phenomenal states: we can have experience without the experience of being a self undergoing the experience – so-called oceanic experience.

    Metzinger suggests at http://www.philosophie.uni-mainz.de/Dateien/beingnoone2.pdf (p. 3) that a system has to meet 3 constraints to be minimally conscious, but that such consciousness would lack “rich internal structure,…the complex texture of subjective time, [and] the perspectivalness going along with a first-person point of view.”

    Of course the self-experience is normally part of being conscious, and a central element of it, but it sounds as if on your theory subjectivity is a necessary condition for having experience.

  60. Arnold Trehub says:

    Tom: “… it sounds as if on your theory subjectivity is a necessary condition for having [a conscious] experience.”

    Yes, that’s right. See, for example, Fig.1 in *Evolution’s Gift: Subjectivity and the Phenomenal World*, here:

    http://evans-experientialism.freewebspace.com/trehub01.htm

    Tom: “Metzinger, although he takes subjectivity to be a central aspect of consciousness, says that subjectivity isn’t necessary for having phenomenal states:”

    I’m not sure about this. For example in the article you cite, Metzinger explicitly writes “Phenomenologically, minimal consciousness is described as the presence of a world.” And in his recent Scholarpedia article about the Self, he writes “The functional basis for instantiating the phenomenal first-person perspective can be seen as a specific cognitive achievement: the ability to use a centered representational space (Trehub 1991, 2007, 2009).”

    My claim is that we cannot have the minimal conscious state of the *presence of a world* without activation of a transparent brain representation of a locus of origin, the core self (I!), within a global volumetric space (retinoid space). This is subjectivity and this minimal conscious state corresponds to the most primitive level of consciousness, C1, in my paper *Space, Self, and the Theater of Consciousness*. It seems to me that the above two quotes from Metzinger suggest that he might agree with me that subjectivity is necessary for having phenomenal states. Notice that the core self (I!) is different from the phenomenal self model (PSM) and, in fact, is a necessary *precondition* for constructing a phenomenal self model.

  61. Arnold Trehub says:

    Here’s an interesting article relating to the self-locus:

    http://ow.ly/7SQGn

    I discuss the out-of-body experience in relation to retinoid space in a paper now in press in JCS. An early draft of the paper can be seen here:

    http://www.people.umass.edu/trehub/where-am-i-redux.pdf

  62. Vicente says:

    Arnold,

    living neurons with all of their internal biophysical properties and local electromagnetic fields

    This is not very coherent with your theory… these biophysical properties are already considered part of the physical infrastructure required to achieve the logical circuitry (information-processing) layout of the retinoid system, and it is this logical layout that supports consciousness.

    Why then did you mention that we should consider the underlying biophysical properties to build our technological model of the putative retinoid system?

    Could it be that your subconscious is reluctant to accept that just logical architectures are the building blocks of consciousness….

  63. Arnold Trehub says:

    Vicente, a logical architecture, as such, is no more than a formal description. It is not a physical mechanism with interrelated physical components. A logical architecture can’t constitute consciousness any more than the circuit diagram of a radio can be a working radio. Maybe I don’t understand what you are suggesting.

  64. Vicente says:

    Arnold, I’m suggesting that in your theory, once the processing and representational features of the retinoid system are defined, the underlying biophysical mechanisms and structures that enable that information processing are irrelevant. You say that consciousness emerges in the retinoid system as a result of the way it processes and represents sensory information (for which, of course, there is real circuitry implemented in the brain), with the I! locus, the capacity to provide subjectivity, being the most important function. Then any system, artificial or not, that mimics or replicates those features should produce conscious states, irrespective of having Z-planes built with proteins or silicon… ultimately.

    Nevertheless, I could imagine that there are some functional requirements to be met. For example, the processing speed might have to reach a certain threshold in order to achieve the conscious effect, and that requirement could only be met by the processing power of brain tissue. But I don’t see that this is the case.

  65. Arnold Trehub says:

    Vicente: “Then any system, artificial or not, that mimics or replicates those features [of the retinoid system] should produce conscious states, irrespective of having Z-planes built with proteins or silicon… ultimately.”

    “Ultimately” is the devil’s detail. My theoretical claim is that physiological activity in a system of *living neurons* with the specified structure and dynamics of retinoid space in a retinoid system constitutes consciousness. Would an artificial retinoid space, built of silicon, with *exactly the same structure and dynamics* as the protein-based mechanism produce conscious states? My guess would be that *in principle* it would. But I would wonder about the replication at the component level. How could the properties of a silicon “neuron” be exactly like the properties of a biological neuron? In the final analysis this is an empirical problem. If a dynamic silicon-based mechanism with a volumetric space having a fixed coordinate of spatial origin could succeed in the same kinds of empirical tests as the retinoid model does, then we might believe that it is conscious. Science is a pragmatic enterprise.

  66. Tom Clark says:

    Arnold:

    “But I would wonder about the replication at the component level. How could the properties of a silicon “neuron” be exactly like the properties of a biological neuron?”

    Seems to me a complete theory of consciousness would be able to specify which component properties and activities at various levels are necessary and sufficient to entail the existence of phenomenal states for the instantiated system, and say why. That is, if we know exactly why the biologically instantiated retinoid system entails consciousness, then we’d know what sort of silicon or other sort of system would necessarily have it too – there would be no ambiguity. At the moment, however, the explanatory gap seems to remain, at least for many of us.

    Btw, thanks for pointing out that Metzinger cites you. The “presence of a world” may well require a centered representational space, and thus subjectivity in that sense, but subjectivity in the experienced sense of self to whom experience is (apparently) presented might be a further accomplishment of consciousness, which is where Metzinger’s *self-model* theory of subjectivity comes in. As you say, “Notice that the core self (I!) is different from the phenomenal self model (PSM) and, in fact, is a necessary *precondition* for constructing a phenomenal self model.”

    Lastly, having had some out of body experiences myself (great fun, I recommend them and lucid dreams to everyone), I look forward to your JCS article!

  67. Arnold Trehub says:

    Tom: “Seems to me a complete theory of consciousness would be able to specify which component properties and activities at various levels are necessary and sufficient to entail the existence of phenomenal states for the instantiated system, and say why…. the explanatory gap seems to remain, at least for many of us.”

    The point is that an “explanatory gap” exists for *all* fields of scientific theory, not just for our theories of consciousness. There is no aspect of nature that is completely known. In the theory of quantum mechanics, we don’t understand why observation should collapse superposed probabilities to give a measured result. In classical physics, we don’t understand why electron flow through a conductor generates a magnetic field. Scientific understanding is always a work in progress. Why should we single out a scientific theory of consciousness for failing to provide a *complete* explanation when there is no *complete* theoretical explanation in any other field of science?

  68. Arnold Trehub says:

    Peter: “I can flag up propositions in various ways on a piece of paper without making them come to intentional life. To believe something you have to mean it, and unfortunately no-one really knows what ‘meaning it’ means – that’s one of the things to be explained by a full-blown theory of consciousness.”

    In retinoid theory, if you sincerely believe that the sun will rise tomorrow, the neuronal representation of your statement/thought “the sun will rise tomorrow” is synaptically coupled to I!. Like this: [the sun will rise tomorrow]I! This attachment can only happen in your brain, and this is what ‘meaning it’ means for you. I have the same state of belief about the sun rising tomorrow because the same kind of neuro-propositional attachment to my I! happens in my brain. What other brain marker of personal belief seems more credible to you? Or would you argue that there is *no* brain marker of personal belief?

  69. Vicente says:

    Arnold, considering the three following statements:

    1) A conscious being doesn’t understand the nature of matter.

    2) Matter doesn’t understand the nature of matter.

    3) The fundamental particles (mental concepts) arranged themselves in: atoms, molecules, stars, planets, cities, cosmetics, computers, cars, telescopes, living stuff, newspapers, satellites, internet blogs to discuss these issues….

    The explanatory gap in the case of consciousness goes far beyond the lack of complete theories for many other fields of science. As I see it, there is an infinite breach between psychology and physics, with the brain (based on life) as the weirdest bridge ever. Why? I haven’t got a clue.

  70. Arnold Trehub says:

    Vicente: “The fundamental particles (mental concepts) arranged themselves in: atoms, molecules, stars, planets, cities, cosmetics, computers, cars, telescopes, living stuff, newspapers, satellites, internet blogs to discuss these issues….”

    I would suggest a different sequence: … atoms, molecules, planets, living stuff; then *brains*, then *consciousness*, then *science*, then internet blogs to discuss these issues.

    Vicente: “The explanatory gap in the case of consciousness goes far beyond the lack of complete theories for many other fields of science.”

    I agree that the explanation of consciousness is trickier than explanation in other fields of science, but I don’t see the “explanatory gap” as essentially different. Both standard physical theory and any theory of consciousness are inventions constructed by the human brain, which is a relatively recent product of evolution. So the problem of a conscious cognitive brain is at the root of both kinds of theory. Notice the conundrum. Consciousness is an evolutionary adaptation, but the scientific concepts of evolution and consciousness are themselves products of consciousness. Which comes before the other? For me, it seems like the two tracks of a Möbius strip — an endless loop.

  71. VicP says:

    Breaking the Möbius strip:

    If we divide the human person into systems of feelings, we have a fixed nerve structure, or locked system of feeling, in the body or central nervous system, which we call being. Likewise we have fluid or variable marvelous systems of feeling in the brain, which we call consciousness.

    So now we need to ask what causes feeling?

    As the Bible says, that which is hidden will be revealed and that which is revealed will be hidden. The underlying function of adjacent neurons is Victor Panzica’s Supersize Theory: when adjacent neurons fire they lock nuclei function, or supersize their structures, which forms qualia.

  72. Kar Lee says:

    I am late into this thread of discussions.

    Peter, you have defended your position very well, bravely fending off attacks from all spatial directions. :)

    John, I would like to go back to the spatial locations of ideas and thoughts. You said, “Thoughts, as inner speech, are clearly located somewhere between my ears in my current experience.” But a thought does not necessarily take the form of an inner speech. Say you have an appointment at 5, and you suddenly glance at the clock and it is 4:48. You immediately “know” that you are going to be late. This feeling of knowing something, I will claim, is not in any way related to any concept of space. You may want to claim that it is inside your body, but that is just your inference. People mistakenly believed they thought with their hearts too. Now that we all know we think with our brains, you can claim your thoughts must have occurred between your ears. But before the fact that the brain is for thinking was established, you might just as well have claimed that your thoughts occurred in your chest. The point is, to claim that a thought must always be realized in some sort of spatial medium or location is meaningless. You can imagine yourself typing your thoughts out in complete English sentences on a computer screen, or dictating them to a recorder, but this is an unnecessary step. A thought comes like a flash, and it does not need to feel as if it comes through any physical realization. Some thoughts appear like a self-conversation (by the way, is it always a male voice for you?). Some thoughts come as an image (you can visualize how to solve a partial differential equation by working on the visual image of an equation; is it on a blackboard, by the way, or is it usually a piece of white paper, or a paper of unknown color?). See, before I ask these kinds of questions, you may not even have thought about these attributes. Before you give them these attributes, they did not have attributes. Same goes for the spatial location attributes.

    To insist that an idea must have all kinds of physical attributes is like insisting a sound has color, or a smell has frequency, a line has volume, or an image has smell. These are either category errors, or if the attributes can exist, they are irrelevant.

    I frequently remember stories that I can re-tell, but am unable to remember where I read or heard them (radio? Was it audio or was it visual?). The story itself has been distilled into a mere concept without a carrying medium.

    So, I am perplexed by your claims, just like you are by others.

  73. Arnold Trehub says:

    Kar: “But a thought does not necessarily take the form of an inner speech. Say you have an appointment at 5, and you suddenly glance at the clock and it is 4:48. You immediately “know” that you are going to be late. This feeling of knowing something, I will claim, is not in any way related to any concept of space.”

    It seems to me that you are confusing *where* the feeling of knowing is in your phenomenal world and what the feeling is *about*. In the case of knowing that you are going to be late, this inference, based on a glance at your watch, is reached pre-consciously (no phenomenal location) before you have the conscious feeling somewhere in your body envelope (phenomenal location) of knowing/recognizing something. But how do you consciously know this feeling of knowing something is particularly about being late? Without inner speech or imagery, this may be simply an emotional feeling of something being amiss. The particular unconscious inference of *I am going to be late* is part of your conscious experience only *after* the pre-conscious neuronal propositions are projected into your egocentric phenomenal world — retinoid space in my theoretical model. The greater part of our thinking takes place unconsciously. Motivational filters play a role in determining which parts of our non-conscious thought will reach consciousness in the form of our own inner speech and imagery which is located somewhere in egocentric space-time.

  74. Eric Thomson says:

    Yes, content/vehicle confusions abound, as I already pointed out in this comment.

    I think such confusions are here to stay, unfortunately. It seems ineliminably enticing for everyone who starts to think seriously about consciousness.

  75. Arnold Trehub says:

    Eric, you make good points about the content/vehicle confusion. I hope that this will be less of a sticking point in the future as investigators of consciousness focus more on underlying brain mechanisms and systems, and the relationship between 3rd-person (vehicle) descriptions and 1st-person (content) descriptions. Making the proper distinctions can be wobbly if properties of the relevant brain mechanisms are not specified. Incidentally, I make use of Dennett’s thought experiment in this paper:

    http://theassc.org/documents/where_am_i_redux

  76. Doru says:

    Arnold,
    If “Beliefs are encoded by tokens in a semantic network as explicit propositions: whether we consider these propositions true or false is determined by a linkage with a special token which indicates the truth value”, then
    “who” does the “encoding” and the “linkage”, which seems an extreme amount of work and determination and requires creative intelligent design?
    Or maybe we are born with all the possible semantic propositions linked to generic tokens (both “true” and “false”), and through learning our brain “prunes out” connections tested as false in other brains, or tested in our own through repetition and experience? In other words: the brain cannot “create” a belief; it already has them all, and the only thing it can do is open up and discard the false ones that do not make sense.

  77. Kar Lee says:

    Arnold [73],
    The confusion is hardly mine. Consider this question of yours:
    “Peter, would you claim that your conscious thoughts about economic inflation have no location in space?”
    What does it mean for a thought to have a location in space? Obviously, if a thought is correlated with some bio-chemical processes happening in one’s brain then, taking a third-person view, it is located inside A brain; but it is also located within the city this individual lives in. Why not go finer and define which part of the brain? Keep going finer and finer in resolution and one will end up with nothing. The brain does take in information from outside, so you might need to include those outside elements in the processes. The boundary is quite unclear.

    But to this person, from a first-person view, what meaning is there in constantly reminding oneself that this thought is in one’s brain? Does he even care? For all he is concerned, it is the inflation model he has in mind. I would argue that the spatial attributes of a thought are quite irrelevant. To bring them in is to create confusion, I am afraid.

  78. Arnold Trehub says:

    Doru: “If ‘Beliefs are encoded by tokens in a semantic network as explicit propositions: whether we consider these propositions true or false is determined by a linkage with a special token which indicates the truth value’, then
    ‘who’ does the ‘encoding’ and the ‘linkage’, which seems an extreme amount of work and determination and requires creative intelligent design?”

    The encoding and linkage to I! is done by semantic networks in the unconscious mechanisms of the cognitive brain. For example, see “The Brain’s I and States of Belief”, pp. 302-304, here:

    http://www.people.umass.edu/trehub/thecognitivebrain/chapter16.pdf

    Doru: “Or maybe we are born with all the possible semantic propositions linked to generic tokens (both “true” and “false”), and through learning our brain “prunes out” connections tested as false in other brains, or tested in our own through repetition and experience? In other words: the brain cannot “create” a belief; it already has them all, and the only thing it can do is open up and discard the false ones that do not make sense.”

    I don’t see how we could be born with a brain that contains innate propositions about every object and event that we might possibly experience in our lifetime. Do you think this is a credible theory?

  79. Arnold Trehub says:

    Kar: “But to this person, from a first-person view, what meaning is there in constantly reminding oneself that this thought is in one’s brain? Does he even care? For all he is concerned, it is the inflation model he has in mind. I would argue that the spatial attributes of a thought are quite irrelevant.”

    I don’t keep reminding myself that my thoughts are in my brain, and I certainly wouldn’t suggest that you keep reminding yourself that your thoughts are in your brain! But from a scientific stance, if you want to understand conscious thoughts, you have to start with the fact that the overwhelming weight of evidence points to the fact that all thought happens in the brain. Then the question that has to be answered is “What kind of brain mechanisms and systems generate conscious thought/phenomenal experience?” My claim is that you are conscious if and only if you have a brain representation of *something somewhere* in relation to your core self (I!) in retinoid space. In other words, brain representations that are *nowhere* in relation to your core self are unconscious/non-phenomenal representations. These constitute, by far, the greater part of our thought processes. So for a person to be *consciously* thinking about economic inflation, inner speech and imagery related to economic inflation must be generated somewhere in his/her egocentric (retinoid) space. How do you know that you are currently thinking about inflation if you do not experience semantically related words in your head as inner speech, or have relevant images (e.g., graphs) in your head?

  80. Doru says:

    Arnold, on page 203 you say that the discharge of the “I” tokens to the corresponding “locus retinoid” is done through labeled lines, thereby allowing lexical assignments of propositions within the semantic network.
    I am skeptical that it can be done without an initial “predetermined informational avatar” or “soul” that is “booted in” at conception by an external entity (intelligent design), or is “built in” through genetic encoding (which means that we are born with belief information about the outside world).
    It seems to me that neither of those two scenarios is plausible, and that we are born with all 2.6 trillion synaptic connections in a fully connected, fully saturated and fully confused state. The number of combinatorial possibilities of different patterns is staggering (more than the number of particles in the known Universe), but that number goes down pretty fast during a lifetime. Most of the patterns never get reinforced, and they fade away.

  81. Arnold Trehub says:

    From #28 in the SPAUN thread:

    John: “There of course is a huge amount of empirical evidence in the field of neuroscience, but what is clearly lacking is an overall perspective, a grand theory. Until that point is reached, or even a first crack at it yields predictions that provide hitherto unknown knowledge, then as far as I am concerned the knowledge of how the brain works is minimal, without necessarily being de minimis.”

    The point is that there *is* an “overall perspective, a grand theory”. It has been detailed in *The Cognitive Brain* and it has been tested in many ways. It successfully predicts a very wide range of previously inexplicable mental phenomena, and it even provides “hitherto unknown knowledge”.

    Roger Penrose recently edited a book titled *Consciousness and the Universe*. In it, I have a chapter titled: “Evolution’s Gift: Subjectivity and the Phenomenal World”. In my chapter, I describe an experiment that I believe provides decisive evidence in support of the retinoid theory of consciousness. Here is an excerpt from my chapter that gives an account of the experiment:
    ………………………………………………………………………….

    Complementary Neuronal and Phenomenal Properties

    In the development of the physical theory of light, the double-slit experiment was critical in demonstrating that light can be properly understood as both particle and wave. Similarly, I believe that a particular experiment – a variation of the seeing-more-than-is-there (SMTT) paradigm – is a critical experiment in demonstrating that consciousness can be properly understood as a complementary relationship between the activity of a specialized neuronal brain mechanism, having the neuronal structure and dynamics of the retinoid system, and our concurrent phenomenal experience.

    Seeing-More-Than-is-There (SMTT)

    If a narrow vertically oriented aperture in an otherwise occluding screen is fixated while a visual pattern is moved back and forth behind it, the entire pattern may be seen even though at any instant only a small fragment of the pattern is exposed within the aperture. This phenomenon of anorthoscopic perception was reported as long ago as 1862 by Zöllner. More recently, Parks (1965), McCloskey and Watkins (1978), and Shimojo and Richards (1986) have published work on this striking visual effect. McCloskey and Watkins introduced the term seeing-more-than-is-there to describe the phenomenon and I have adopted it in abbreviated form as SMTT. The following experiment was based on the SMTT paradigm (Trehub 1991).

    Procedure:

    1. Subjects sit in front of an opaque screen having a long vertical slit of very narrow width as an aperture in the middle of the screen. Directly behind the slit is a computer screen, on which any kind of figure can be displayed and set in motion. A triangular figure in outline, with a width much greater than its height, is displayed on the computer. Subjects fixate the center of the aperture and report that they see two tiny line segments, one above the other on the vertical meridian. This perception corresponds to the actual stimulus falling on the retinas (the veridical optical projection of the state of the world as it appears to the observer).

    2. The subject is given a control device which can set the triangle on the computer screen behind the aperture in horizontal reciprocating motion (horizontal oscillation) so that the triangle passes beyond the slit in a sequence of alternating directions. A clockwise turn of the controller increases the frequency of the horizontal oscillation. A counter-clockwise turn of the controller decreases the frequency of the oscillation. The subject starts the hidden triangle in motion and gradually increases its frequency of horizontal oscillation.

    Results:

    As soon as the figure is in motion, subjects report that they see, near the bottom of the slit, a tiny line segment which remains stable, and another line segment in vertical oscillation above it. As subjects continue to increase the frequency of horizontal oscillation of the almost completely occluded figure there is a profound change in their experience of the visual stimulus.

    At an oscillation of ~ 2 cycles/sec (~ 250 ms/sweep), subjects report that they suddenly see a complete triangle moving horizontally back and forth instead of the vertically oscillating line segment they had previously seen. This perception of a complete triangle in horizontal motion is strikingly different from the tiny line segment oscillating up and down above a fixed line segment which is the real visual stimulus on the retinas.

    As subjects increase the frequency of oscillation of the hidden figure, they observe that the length of the base of the perceived triangle decreases while its height remains constant. Using the rate controller, the subject reports that he can enlarge or reduce the base of the triangle he sees, by turning the knob counterclockwise (slower) or clockwise (faster).

    3. The experimenter asks the subject to adjust the base of the perceived triangle so that the length of its base appears equal to its height.

    Results: As the experimenter varies the actual height of the hidden triangle, subjects successfully vary its oscillation rate to maintain approximate base-height equality, i.e., lowering the rate as the triangle’s height increases, and raising the rate as its height decreases.

    This experiment demonstrates that the human brain has internal mechanisms that can construct accurate analog representations of the external world. Notice that when the hidden figure oscillated at less than 2 cycles/sec, the observer experienced an event (the vertically oscillating line segment) that corresponded to the visible event on the plane of the opaque screen. But when the hidden figure oscillated at a rate greater than 2 cycles/sec, the observer experienced an internally constructed event (the horizontally oscillating triangle) that corresponded to the almost totally occluded event behind the screen. The experiment also demonstrates that the human brain has internal mechanisms that can accurately track relational properties of the external world in an analog fashion. Notice that the observer was able to maintain an approximately fixed one-to-one ratio of height to width of the perceived triangle as the height of the hidden triangle was independently varied by the experimenter.
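    The base-height matching behavior above can be sketched numerically. Assume, as a simplification of the retinoid account, that the perceived base width is (internal translation speed) × (sweep period), i.e. inversely proportional to oscillation frequency; the constant `K` below is hypothetical, chosen only for illustration:

    ```python
    # Sketch of the SMTT base-height matching task, under the assumption
    # that perceived base width varies inversely with oscillation frequency.
    K = 8.0  # hypothetical speed-times-unit-sweep constant (cm * Hz)

    def perceived_base(freq_hz: float) -> float:
        """Perceived base width (cm) of the triangle at a given rate."""
        return K / freq_hz

    def matching_frequency(height_cm: float) -> float:
        """Rate the subject must set so that perceived base == height."""
        return K / height_cm

    # As the experimenter raises the hidden triangle's height, the subject
    # must lower the oscillation rate to keep base-height equality.
    for h in (2.0, 4.0, 8.0):
        f = matching_frequency(h)
        assert abs(perceived_base(f) - h) < 1e-9
        print(f"height {h} cm -> rate {f:.2f} Hz")
    ```

    The inverse relation is why subjects lower the rate when the triangle gets taller and raise it when the triangle gets shorter, exactly as reported in the results above.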

    These and other empirical findings obtained by this experimental paradigm were predicted by the neuronal structure and dynamics of a putative brain system (the retinoid system) that was originally proposed to explain our basic phenomenal experience and adaptive behavior in 3D egocentric space (Trehub, 1991). It seems to me that these experimental findings provide conclusive evidence that the human brain does indeed construct phenomenal representations of the external world and that the detailed neuronal properties of the retinoid system can account for our conscious content.
    …………………………………………………………………………

    I would be interested, John, in your thoughts about the implications of this SMTT experiment.

    Here are a couple of other striking perceptual experiences (previously unexplained in terms of brain mechanisms) that can now be understood as natural consequences of the neuronal structure and dynamics of the brain’s retinoid system:

    http://www.michaelbach.de/ot/sze_shepardTerrors/index.html

    http://people.umass.edu/trehub/Rotated%20table-2.pdf

    Some other examples of empirical support of the retinoid model are given in this publication:

    http://theassc.org/documents/where_am_i_redux

    So I think it is fair to say that the theoretical model of the brain that is detailed in *The Cognitive Brain* provides an overall perspective on how the brain works, a perspective that actually has a good deal of empirical support. I think the case can be made that the retinoid model is a good candidate for a standard model of consciousness and cognition.

  82. Arnold Trehub says:

    I just recently found out that there is ongoing work in other research centers where the retinoid model is being closely analyzed and re-described in detailed mathematical terms.

    See these papers by Kovalev:

    A. M. Kovalev (2011). Visual Space and Trehub’s Retinoids. *Optoelectronics, Instrumentation and Data Processing*, 47, 81-87.

    A. M. Kovalev (2012). Stability of the Vision Field and Spheroidal Retinoids. *Optoelectronics, Instrumentation and Data Processing*, 48, 620-627.

    If you look at Fig. 5 in Kovalev’s 2012 paper I think you will get a good idea of the formidable difficulty one would have in trying to build an artificial retinoid system. This job was done by nature after millions of years of biological evolution. And now we have a world to ponder.

  83. George says:

    I’m a bit late to this, but –

    Interesting, the discussion about the “locating” of thoughts and experiences. If I examine my experience closely, I find that it is more accurate to say that there is a vast unstructured “place” in which my experience occurs, within which there is a “phenomenal space” (with a 3d structure underlying it) where my sensory experience appears.

    Thoughts may or may not appear in that same “space”; mostly they seem to be in a parallel-simultaneous “space”, or they can be located (or really “overlaid”) in the “phenomenal space”. There is not necessarily a spatial relationship between the phenomenal space and a particular thought’s space.
