Dan Dennett confesses to a serious mistake here, about homuncular functionalism.

An homunculus is literally a “little man”. Some explanations of how the mind works include modules which are just assumed to be capable of carrying out the kind of functions which normally require the abilities of a complete human being. This is traditionally regarded as a fatal flaw, equivalent to saying that something is done by “a little man in your head”, which is no use because it leaves us with the job of explaining how the little man does it.
Dennett, however, has defended homuncular explanations in certain circumstances. We can, he suggests, use a series of homunculi so long as they get gradually simpler with each step, and we end up with homunculi who are so simple we can see that they are only doing things a single neuron, or some other simple structure, might do.

That seems fair enough to me, except that I wouldn’t call those little entities homunculi; they could better be called black boxes, perhaps. I think it is built into the concept of an homunculus that it has the full complement of human capacities. But that’s sort of a quibble, and it could be that Dennett’s defence of the little men has helped prevent people being scared away from legitimate “homuncular” hypotheses.

Anyway, he now says that he thinks he underestimated the neuron. He had been expecting that his chain or hierarchy of homunculi would end up with the kind of simple switch that a neuron was then widely taken to be; but he (or ‘we’, as he puts it) radically underestimated the complexity of neurons and their behaviour. He now thinks that they should be considered agents in their own right, competing for control and resources in a kind of pandemonium. This, of course, is not a radical departure for Dennett, harmonising nicely with his view of consciousness as a matter of ‘multiple drafts’.

It has never been really clear to me how, in Dennett’s theory, the struggle between multiple drafts ends up producing well-structured utterances, let alone a coherent personality, and the same problem is bound to arise with competing neurons. Dennett goes further and suggests, in what he presents as only the wildest of speculations, that human neurons might have some genetic switch turned on which re-enables some of the feral, selfish behaviour of their free-swimming cellular ancestors.

A resounding no to that, I think, for at least three reasons. First, it confuses their behaviour as cells, happily metabolising and growing, with their function as neurons, firing and transmitting across synapses. If neurons went feral it is the former that would go out of control, and as Dennett recognises, that’s cancer rather than consciousness. Second, neurons are just too dependent to strike out on their own; they are surrounded, supported, and nurtured by a complex of glial cells which is often overlooked but which may well exert quite a detailed influence on neuronal firing. They have neither the incentive nor the capacity for independence. Third, although the evolution of neurons is rather obscure, it seems probable that they are an opportunistic adaptation of cells originally specialised for detecting elusive chemicals in the environment; so they may well be domesticated twice over, and not at all likely to retain any feral leanings. As I say, Dennett doesn’t offer the idea very seriously, so I may be using a sledgehammer on butterflies.

Unfortunately Dennett repeats here a different error which I think he would do well to correct: the idea that the brain does massively parallel processing. This is only true, as I’ve said before, if by ‘parallel processing’ you mean something completely different to what it normally means in computing. Parallel processing in computers involves careful management of processes which are kept discrete, whereas the brain provides processes with complex and promiscuous linkages. The distinction between parallel and serial processing, moreover, just isn’t that interesting at a deep theoretical level; parallel processing is just a handy technique for getting the same processes done a bit sooner; it’s not something that could tell us anything about the nature of consciousness.
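To pin down the computing sense of the term, here is a toy sketch (mine, not Dennett’s) in Python: the parallel version farms the same discrete tasks out to a pool of managed workers and produces exactly the same answer as the serial loop, only sooner.

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x * x

data = list(range(8))

# Serial: one worker performs each task in turn.
serial = [f(x) for x in data]

# Parallel: the same tasks are handed to a pool of workers,
# each task kept discrete and managed by the executor.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(f, data))

# Identical results either way: parallelism buys time, not a
# different kind of computation.
assert serial == parallel
```

Nothing about the answer depends on how many workers there are, which is the point: it is a scheduling convenience, not a different style of thought.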

Always good to hear from Dennett, though. He says his next big project is about culture, probably involving memes. I’m not a big meme fan, but I look forward to it anyway.

74 Comments

  1. Dennett on Neurons says:

    [...] H/t Peter over at Conscious Entities [...]

  2. Roy Niles says:

    Neurons evolved to serve an organism’s purpose, and not to acquire their own purposes except to strategically improve, perhaps, the organism’s.
    And as to consciousness being a matter of multiple drafts, if they’re competing, it’s also as part of a purpose serving function – part of our complicated choice making system that selects a set of most workable options.
    The consciousness “drafts” involved are elements of intelligent awareness, some more intelligently aware at times than others.
    Dennett has not believed that we evolved to serve our own purposes, but lately he seems to be reconsidering that position.

  3. Arnold Trehub says:

    Peter, you wrote: “The distinction between parallel and serial processing, moreover, just isn’t that interesting at a deep theoretical level; parallel processing is just a handy technique for getting the same processes done a bit sooner; it’s not something that could tell us anything about the nature of consciousness.”

    Exactly! Look at Fig.16.1 here: http://people.umass.edu/trehub/thecognitivebrain/chapter16.pdf. It is *series* and *parallel* and *feed-forward* and *feed-back* in a very complex architecture. Individual cells don’t tell us the secret of consciousness. It is the specialized neuronal *mechanisms*, in which the cells are the key components, that have the functional competence to give us our conscious experience. And, from the standpoint of scientific understanding, theoretical models of the detailed structure and dynamic properties of these neuronal brain mechanisms enable us to make predictions about conscious phenomena and test the validity of the candidate theories.

    In a metaphorical sense, our brain does contain multiple drafts of what is to be conscious experience, but each “final” draft must be determined by the interplay of particular kinds of brain mechanisms. The specification of these neuronal mechanisms and their organization into larger cognitive systems is where the answers we seek will be found.

  4. Tom Clark says:

    Dennett will be giving a talk for the Consciousness Online conference, see http://consciousnessonline.com/program-2013/ and they’ve made a paper by him and Michael Cohen available, Consciousness Cannot be Separated from Function, at http://consciousnessonline.com/. D&C claim to show that you don’t have phenomenology without informational access (no phenomenal consciousness without access consciousness), but don’t explain why phenomenology is associated with informational functions. That explanation, they say, is a ways off, and cite some functionalist/representationalist research programs as promising beginnings. At least they admit the existence of phenomenology as in need of explanation.

  5. Peter says:

    Tom – many thanks for that; looks as if there’s a lot of good stuff at the conference. I must remember to visit.

  6. VicP says:

    Since human technology follows nature I find most analogies to technology cumbersome, like comparing bird flight to aircraft technology. Parallel and serial processing are strong types but humans are more analog than digital. Multiple Drafts inclines me more towards sitting in a room with the window open, door open and poor heat causing me to put on a sweater and sit closer to the fire.

    They recently posted “Bill Gates 11 Rules To Live By”, which are more adaptive to 11 mental algorithms not necessarily listed in their interrupt priority sequence.

  7. Peter says:

    I certainly think the analog/digital distinction is more interesting. I’m old enough to remember the time when it was quite unclear whether digital or analog computers would inherit the earth.

  8. Arnold Trehub says:

    If you think about computational artifacts, the analog/digital distinction is relevant. But if you think about the brain as a cognitive organ then an analog/proposition or neuronal-image/neuronal-token distinction is more appropriate.

  9. VicP says:

    Only 22 minutes into the interview. Your link did not work, but here’s the one I used.

    http://www.edge.org/memberbio/daniel_c_dennett

    Thanks for the post, I’ll be back.

  10. Roy Niles says:

    Are you guys kidding? Awareness is more digital than analog, but since it’s an anticipatory and reactive function it has a strategic presence that is neither.

  11. Vicente says:

    I think there is a point missing in this discussion. In the Organ-Cell relationship analysis, the brain is quite different from other organs. For example, you can live with just 1/8 of one kidney, renal function remains sufficient; you can live with 1/2 lung; you can live with 1/4 of the liver; you can live with the heart function diminished significantly… etc. But the brain’s cognitive function doesn’t work like that. Spoil 1%, and depending where the damage takes place, the whole brain becomes useless for higher functions. So, does it make sense to talk about isolated neurons? One nephron can filter blood on its own, but can a neuron process information on its own? I don’t think so: no input, no output, useless. I would say the brain is much more than the sum of its parts; architecture and dynamics are just as important.

    A different issue is that probably some individual neuronal features are central to explain phenomenal consciousness… Maybe Hameroff’s tubules play a role here.

  12. Tom Clark says:

    Vicente:

    “A different issue is that probably some individual neuronal features are central to explain phenomenal consciousness… Maybe Hameroff’s tubules play a role here.”

    I’m not sure why you think this given that phenomenal consciousness seems to be associated with certain sorts of higher, information integrating functions, see http://www.naturalism.org/kto.htm#Neuroscience and see Dennett’s paper linked previously in this thread. These functions, as you point out, cease to work if a certain perhaps small proportion of the neural connectivity is wiped out. And when they go, so does consciousness, albeit selectively depending on the extent and type of damage.

  13. Vicente says:

    Tom,
    “I’m not sure why you think this given that phenomenal consciousness seems to be associated with certain sorts of higher, information integrating functions,”

    Sorry, I was sort of trying to state, not the opposite, but that the idea that, let’s say, the canvas and paints for the phenomenal “projection” (mental movie?), could be based upon some biophysical properties of each individual neuron… as for the higher functions it seems to me that we cannot look at them as the result of adding the activity contributions of several single units.

    Sorry for the metaphorical excess.

  14. Tom Clark says:

    If the types and internal architecture of neurons associated with consciousness were substantially different from those that are not, that would support your hypothesis, but as far as I know there is no such difference. The “mental movie” (e.g., when having a dream) seems associated with the activation of certain global, interconnected processes, hence likely a matter of information integrating functions, not the types of neurons carrying out those functions, which after all also play roles in more encapsulated functions *not* associated with consciousness.

  15. Vicente says:

    Well, when having a dream or watching a sunset, it doesn’t matter. The point is that the script, the plot, of the mental content could depend on the processes, but the qualia to produce your inner images and sounds could rely on individual neuronal features. What is there in visual area V4 that can process/create color?

    If you artificially stimulate with electrodes some areas you can induce a “raw qualia experience” (blotchy?), not related with the activation of certain global, interconnected processes, hence NOT likely a matter of information integrating functions.

  16. Tom Clark says:

    “If you artificially stimulate with electrodes some areas you can induce a “raw qualia experience” (blotchy?), not related with the activation of certain global, interconnected processes, hence NOT likely a matter of information integrating functions.”

    I’m assuming the subject is already conscious during this experiment, so the fact that the stimulation results in particular raw sensations doesn’t count against the observation that it takes an entire informational network to support consciousness of those sensations.

    The question “What is there in visual area V4 that can process/create color?” is what needs to be asked of what eventually are found to be the NCC: what is it about the particular informational functions associated with consciousness that entails its existence? There’s no evidence I’m aware of that suggests consciousness depends on the characteristics of the individual neurons that instantiate the NCC, but lots that it’s the larger scale organization that matters.

  17. Arnold Trehub says:

    Tom: “The ‘mental movie’ (e.g., when having a dream) seems associated with the activation of certain global, interconnected processes, hence likely a matter of information integrating functions, not the types of neurons carrying out those functions, which after all also play roles in more encapsulated functions *not* associated with consciousness.”

    I agree. But why not acknowledge that the SMTT experiments [1] show that the neuronal structure and dynamics of the brain’s putative retinoid mechanisms are what generate our conscious experiences/”mental movies”? To my knowledge, the retinoid model of consciousness is the *only* theoretical model that has been able to account for the systematic creation and control of complex conscious features/qualia, in the absence of corresponding sensory input, that have been observed in the SMTT experiments. These experiments demonstrate that the function of this particular brain mechanism is to give us our conscious experience of the world around us.

    1. See “Space, self, and the theater of consciousness”, pp. 324-325 here: http://people.umass.edu/trehub/YCCOG828%20copy.pdf.

  18. Vicente says:

    Now it would be a good time to ask about what is the definition of consciousness we are using.

    What I am saying is that to write a comic as a sequence of frames or cartoons, you need a meaningful story, a plot, and you also need a sheet of paper and color pencils… for the first requisite you need the whole brain, you need processes; for the latter, what do you need? What is the specific feature of V4 neurons, if any?

    The conceptual treatment cannot be approached on an additive basis; it is not the result of adding single neurons’ activities. The raster and the soundtrack (plus other qualia), could be.

  19. Tom Clark says:

    Arnold,

    “…why not acknowledge that the SMTT experiments [1] show that the neuronal structure and dynamics of the brain’s putative retinoid mechanisms are what generate our conscious experiences/’mental movies’?”

    My reasons for not acknowledging this were covered extensively in our discussion in Pockett Redux (http://www.consciousentities.com/?p=1296), ending in my msg #100, so I won’t reproduce them here or discuss them in this thread.

    Vicente:

    “The raster and the soundtrack (plus other qualia), could be [the result of adding single neurons activities].”

    But is there any *evidence* for this hypothesis? There’s a good deal of evidence that qualitative experience only accompanies the operation of information integrating functions, see the references mentioned above.

  20. Arnold Trehub says:

    Vicente: “Now it would be a good time to ask about what is the definition of consciousness we are using.”

    Here’s the definition I’ve proposed:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective* [1]

    It has been empirically demonstrated that the retinoid model of consciousness can explain this kind of brain representation (see my comment #17 above).

    To invent anything, whether it is a new story, a new cartoon strip, a new scientific theory, or a new kind of machine, we have to imagine something in our phenomenal world that is novel — a possible way to arrange things and processes that did not formerly exist. To do this we need to set a goal, recall images from our learned store of past experiences in our synaptic matrices and semantic networks, project these images into retinoid space as parts of a *possible* world, and modify and rearrange them to satisfy our goal. This is human creativity. An explication of the mechanisms needed for creativity is given in *The Cognitive Brain* [2].

    Vicente: “What is the specific feature of V4 neurons, if any?”

    V4 neurons are arranged retinotopically. They represent the edges and contours of retinal images within a fovea-centered frame. The features of color, motion, relative size, and egocentric location of these visual images depend on other mechanisms in the visual system, and finally on their proper binding before they are projected into retinoid space where they become part of our phenomenal world.

    1. http://theassc.org/documents/where_am_i_redux.

    2. http://people.umass.edu/trehub.

  21. Vicente says:

    Fine, I understand and accept your point; I’ll try mine again, even if I’m fed up of writing with a smartphone.

    There are experiments that prove that the visual system is sensitive to one single photon hitting one retinal neuron. Besides, one V4 neuron, or a very small number, can be directly stimulated with an electrode. In both cases you get a phenomenal response, like a color glimmer… and no ordinary visual processing is involved, given the experimental setup. So, it seems that individual neuron activity entails a simple phenomenal response (NCC). Just like painting a dot on the paper. For the whole strip you need the process. What is the mechanism in that neuron responsible for the NCC?

  22. Arnold Trehub says:

    Tom,

    I missed your response #100 in the Pockett thread. Reading it now, I see that you wrote:

    “But still, it’s the *world* that appears to the system *via the model* [emphasis mine], not the model that appears.”

    If you grant that the model (retinoid space?) provides the appearance of the world to the system (transparently), doesn’t this imply that the pattern of neuronal activity in the model is our phenomenal world? And since we are directly acquainted with our phenomenal world, this must be the world that gives us the quality of our perceptions. A case in point: We observe the moon at the horizon as much larger than the moon at its zenith. But the moon projects essentially the same size to our retina (~0.5 deg) whether on the horizon or high in the sky. So our subjective observation of the moon is of its *image as modified in retinoid space*, and not directly of the moon as it exists in real space.

  23. Arnold Trehub says:

    Vicente: “What is the mechanism in that neuron responsible for the NCC?”

    It is *not* the mechanism in that *one* neuron that is responsible for the NCC. It is the activation of a whole chain of *neuronal mechanisms*, by the stimulation of that one neuron. The whole process culminates in the activation of one or more autaptic cells somewhere in retinoid space. This is the phenomenal response that is the conscious correlate of the stimulation of the neuron in V4.

  24. Vicente says:

    But it doesn’t work like that, does it?

    You can’t start an ordinary visual process stimulating a neuron anywhere you decide to. In addition, there is no guarantee that the artificial stimulus will trigger the right action potential. In the single photon case the stimulus is so abnormal that I can’t believe that it starts the chain.

    What about the stuff you see in absolute darkness? That is the result of visual cortex noise; there is no process involved.

  25. Tom Clark says:

    Arnold,

    “If you grant that the model (retinoid space?) provides the appearance of the world to the system (transparently), doesn’t this imply that the pattern of neuronal activity in the model is our phenomenal world?”

    Well, the neuronal activity is at least the NC of our phenomenal world. Whether it’s identical to that world is another question, unsettled as yet I think. We’re not in an observational relationship to our phenomenal world, rather we consist of it as subjects having experience.

    “And since we are directly acquainted with our phenomenal world, this must be the world that gives us the quality of our perceptions.”

    I don’t think we’re acquainted with our phenomenal world (an epistemic, observational relationship), since we consist of it as subjects. Rather, we’re acquainted with the world (we have an epistemic relationship to it) via our neurally instantiated models of it. And those models somehow entail the existence of our phenomenal worlds.

    “A case in point: We observe the moon at the horizon as much larger than the moon at its zenith. But the moon projects essentially the same size to our retina (~0.5 deg) whether on the horizon or high in the sky. So our subjective observation of the moon is of its *image as modified in retinoid space*, and not directly of the moon as it exists in real space.”

    I’d say rather that we observe the moon in real space using our sensory-perceptual-cognitive systems. We don’t observe its image in retinoid space since that’s the modeling of the moon that our sensory-perceptual-cognitive systems carry out and we’re not in a position to observe that process. Subjectively we consist (in part) of the experience that the moon looks larger when we observe the *moon* (not our experience of the moon) near the horizon.

  26. Arnold Trehub says:

    Vicente,

    A very large body of empirical evidence suggests that it really does work that way.

    Vicente: “You can’t start an ordinary visual process stimulating a neuron anywhere you decide to.”

    I certainly agree with that statement.

    Vicente: “What about the stuff you see in absolute darkness? That is the result of visual cortex noise; there is no process involved.”

    It depends on what kind of brain activity you choose to call “noise” as distinct from what you choose to call “signal”. This is theory dependent. If you probe neuronal activity in the visual system with micro-electrodes under complete darkness and hook up the measuring system to an audio amplifier, you will hear a concert of popping noises as the neurons fire off. This occurs in the absence of retinal stimulation. But any kind of brain event that causes your “seeing stuff” *must* be a biophysical *process*. Again, this claim rests on a vast amount of empirical evidence.

  27. Vicente says:

    Just to clarify, I meant that it doesn’t work like that in those particular cases.

    Imagine that you have an isolated in-vitro retinoid organ, and you somehow stimulate it, simulating a possible natural stimulus that could take place in-vivo in a brain. Do you think there would be some associated phenomenal experience? According to your theory it should, shouldn’t it? An experience without an owner…!?

  28. Arnold Trehub says:

    Tom: “I’d say rather that we observe the moon in real space using our sensory-perceptual-cognitive systems.”

    You are describing the *behavioral act* of *looking* at the moon, not the *cognitive act* of *observing* the moon. *Looking* merely entails a visual fixation of the moon in real space, whereas *observing* requires the targeting of selective attention on the brain’s representation of the moon in egocentric space — the moon as it appears to us in our phenomenal world — then detecting it, and forming some kind of judgement about it. It is the detection and forming of a judgement about the features of the moon as it appears to us that is our act of cognitive observation, as distinct from the overt behavior of looking at the moon. I suppose a behaviorist would say that we observe the moon by looking at it.

  29. Tom Clark says:

    Arnold,

    If there’s a cognitive act of observing the actual moon, then it’s the actual moon we form a judgment about, not our representation of the moon. Of course we can focus attention on our *experience* of the moon (“the moon as it appears to us in our phenomenal world”) and form judgments about that too. But when we say we’re observing the moon we usually are referring to the moon, not our experience of it or neurally instantiated representations activated by sensory-perceptual input. A bit of ordinary language philosophy I guess…

  30. Vicente says:

    Arnold, Tom, this debate has two outcomes: HOTs and infinite sequences of observers, or an intrinsic observer… both possibilities seem conceptually out of range to me.

    Tom, if you were studying a star with a radiotelescope, would you be observing the star or the computer-processed images of the star? I see our senses just as any other observation and exploration device… and artificial observation instruments as extensions/enhancements of them.

    As usual, the main problem here is that there is no moon unless some conscious entity happens to come across it and create it as the moon. So, you are not observing the moon, not even the internal “representation” of the moon, but the moon you created.

    The question was: Can the activity of one single isolated neuron lead to any conscious outcome, of any kind…

  31. Arnold Trehub says:

    Tom: “But when we say we’re observing the moon we usually are referring to the moon, not our experience of it or neurally instantiated representations activated by sensory-perceptual input. A bit of ordinary language philosophy I guess…”

    True, most people think that they are directly observing the physical moon, not their phenomenal experience (retinoid representation) of the physical moon. This kind of folk psychology is captured in our ordinary language. But we are in the business of trying to *understand* our conscious experience. In this effort, our ordinary language formulations can lead us astray.

  32. Arnold Trehub says:

    Vicente: “Imagine that you have an isolated in-vitro retinoid organ, and you somehow stimulate it, simulating a possible natural stimulus that could take place in-vivo in a brain. Do you think there would be some associated phenomenal experience? According to your theory it should, shouldn’t it? An experience without an owner…!”

    Yes, according to the retinoid theory of consciousness if one isolated a living retinoid system with at least one synaptically attached synaptic matrix for sensory input and subjected it to diffuse arousal excitation (to wake it up), it should be able to have a phenomenal experience. The “owner” of the experience would be the privileged perspectival origin of its retinoid space, its core self (I!). But, of course, there would be no way to test whether the isolated system had a conscious experience because there would be no way that the system could report its experience.

  33. Arnold Trehub says:

    Vicente: “The question was: Can the activity of one single isolated neuron lead to any conscious outcome, of any kind…”

    Anything is possible, but I don’t see any scientific justification for believing that the activity of a single neuron can lead to consciousness.

  34. Tom Clark says:

    Arnold:

    “True, most people think that they are directly observing the physical moon, not their phenomenal experience (retinoid representation) of the physical moon. This kind of folk psychology is captured in our ordinary language. But we are in the business of trying to *understand* our conscious experience. In this effort, our ordinary language formulations can lead us astray.”

    Re understanding consciousness, we disagree about whether we’re in an observational relationship to our own experience. You think we are; I think not, as argued in Killing the Observer (JCS, 2005), http://www.naturalism.org/kto.htm. But in any case, there is a perfectly good, ordinary sense in which we directly observe physical objects using our sensory-perceptual systems, sometimes aided by various amplifying and measuring devices. A cognitive creature’s sensory contact with the world, and the resulting internal models that track the world reliably, is as direct as observation can get. So I don’t think ordinary language leads us astray in this instance.

  35. Jorge says:

    Tom, you wrote:
    “…in any case, there is a perfectly good, ordinary sense in which we directly observe physical objects using our sensory-perceptual systems, sometimes aided by various amplifying and measuring devices. A cognitive creature’s sensory contact with the world, and the resulting internal models that track the world reliably, is as direct as observation can get.”

    I’m actually inclined to agree more with Arnold here. Although you are correct that our cognitive models are “good” in the evolutionary sense, there is a ton of missing information from what is integrated by consciousness. As such, it’s not trivial or philosophical hair-splitting to say that the internal model is NOT the same as noumenal reality. There may be a number of occluded perceptions that simply do not make it to conscious access or can be hidden by self-serving bias engines deep in the brain.

  36. Tom Clark says:

    Jorge,

    I entirely agree with the points you raise: the model is not the reality since it’s a very selective rendition shaped by adaptation, and not all perceptions make it into consciousness. But is there a better, more direct way for cognitive systems such as ourselves to observe and know the world than using sensory-perceptual data to build and update a behavior-guiding model?

  37. Arnold Trehub says:

    Tom: “… the model is not the reality since it’s a very selective rendition shaped by adaptation, and not all perceptions make it into consciousness.”

    It is important to distinguish *sensations* from *perceptions* in terms of brain function. Sensations are patterns of neuronal activity in our many diverse pre-conscious sensory modalities (synaptic matrices?), while perceptions are these sensory patterns that have been bound in proper spatio-temporal register and projected in egocentric perspective within our global phenomenal world space (retinoid space?). So, in this view, all perceptions are conscious.

    Tom: “But is there a better, more direct way for cognitive systems such as ourselves to observe and know the world than using sensory-perceptual data to build and update a behavior-guiding model?”

    In the retinoid theory our primitive egocentric brain model of the world (our phenomenal world) is not built by “sensory-perceptual data”; it is an innate evolutionary adaptation that uses sensory-perceptual data to enrich and update the content of our global phenomenal world. Our observations of/about the *world*, as distinct from our isolated sensory discriminations, must consist of sensory features (pre-conscious) that had been the targets of selective attention and had been parsed out of the global plenum of our occurrent phenomenal world. This is why we do not observe/perceive the moon as having a constant size as it rises above the horizon.

    Notice that in this formulation the self is not an observer. (Is this consistent with your “killing the observer”, Tom?) Observation is a function of our pre-conscious sensory mechanisms extracting information from our global phenomenal world.

  38. 38. Callan S. says:

    I’m not sure what ‘domesticated’ means here, in regards to brain cells?

    The problem with the ‘domesticated’ is it implies domesticated TO someone.

    And yet we’re talking about the things that make up someone.

    Who are these synapses supposed to be domesticated to?

  39. 39. Charles Wolverton says:

    Arnold -

    “Our observations of/about the *world*, as distinct from our isolated sensory discriminations, must consist of sensory features (pre-conscious) that had been the targets of selective attention and had been parsed out of the global plenum of our occurrent phenomenal world. This is why we do not observe/perceive the moon as having a constant size as it rises above the horizon.”

    Like Tom, I find the word “observation” suggestive of the Cartesian theater. OTOH, I think the quoted description of “observations of/about the world” is consistent with my guess at what’s going on, only I’d say it a bit differently. Something like:

    Repeated sensory input due to selective attention to objects/events in the world (eg, viewing the moon) in different contexts (eg, near a visible horizon or far removed from it) results in the ability to detect patterns of neural activity (AKA, “phenomenal features”) which result in related context-dependent phenomenal experiences (eg, the moon’s apparent size in phenomenal space decreasing with increasing distance from the horizon).

    Does that “observation”-free description capture the intent of the quote?

  40. 40. Arnold Trehub says:

    Charles: “Repeated sensory input due to selective attention to objects/events in the world (eg, viewing the moon) in different contexts (eg, near a visible horizon or far removed from it) results in the ability to detect patterns of neural activity (AKA, “phenomenal features”) which result in related context-dependent phenomenal experiences (eg, the moon’s apparent size in phenomenal space decreasing with increasing distance from the horizon)….. Does that ‘observation’-free description capture the intent of the quote?”

    It does in a rough way. But it seems to suggest that the moon illusion depends on one having more than one experience of a rising moon to have the phenomenal experience of the moon diminishing in size as it rises from the horizon. Evidence suggests (and the retinoid model predicts) that the moon illusion will be phenomenally experienced upon one’s first exposure to a rising moon. For a bit more about this, see “Space, self, and the theater of consciousness”, pp. 323-324, here: http://people.umass.edu/trehub/YCCOG828%20copy.pdf

  41. 41. Arnold Trehub says:

    BTW, if we take a *theater* as a metaphor, and think of consciousness as the bright stage (retinoid space) of the theater and the audience in the dark of the theater as the separate observers (sensory/cognitive mechanisms) of the stage, would you consider this a Cartesian theater or an apt metaphor?

  42. 42. Vicente says:

    Why !!?? why did you leave the audience, the cognitive element required for the play to make sense (meaning!), the most important element, out of the conscious side. Why?? Consciousness requires all the ingredients… intelligence most of all. The stage without an audience is nothing, and the audience without a stage does nothing. Actually, in a way, the audience creates the stage. Every day there is increasing evidence of the brain creating its own reality, neglecting and ignoring sensory input to a significant extent.

    This is why it is so difficult to have sensible discussions…

    You are right Arnold, it all depends on the definition of consciousness each of us uses. Unfortunately there is no definition.

  43. 43. Arnold Trehub says:

    Vicente: “Why !!?? why did you leave the audience, the cognitive element required for the play to make sense (meaning!), the most important element, out of the conscious side. Why??”

    Because, according to the retinoid theory, consciousness is nothing other than our present phenomenal world (subjective experience). It is *this phenomenal world* that our sensory/cognitive mechanisms (non-conscious and pre-conscious) have to analyze and make sense of. By then projecting (via recurrent axons) this sense back into our phenomenal world (egocentric retinoid space), they give us a conscious experience that we call meaning. Brain studies indicate that it takes ~0.5 seconds for the sensory/cognitive events to go through this recurrent loop before they become part of our conscious experience. As you read these words, they are processed pre-consciously in the “dark audience” of your brain’s sensory-cognitive modalities before they mean anything to you in your “bright stage” of consciousness.

    Vicente: “Unfortunately there is no definition [of consciousness].”

    Here is my definition:

    *Consciousness is a transparent representation of the world from a privileged egocentric perspective*

    The retinoid model of consciousness is consistent with this definition. For example see http://theassc.org/documents/where_am_i_redux .

  44. 44. Vicente says:

    Arnold,

    So the whole process, including the analysing, making sense, and projecting back is part of the conscious experience.

    Consciousness is much more than that definition. It is the representation, and the understanding of the representation, and the emotions linked to that understanding… and the expectations of future representations and memories of past ones, etc…

    I don’t agree at all. Actually I believe that epistemological approaches to consciousness are unsatisfactory by nature.

    My ontological definition:

    Consciousness is the enabler of existence. To be conscious is to be.

    Now, how to put that on a neurological basis? That is the problem.

  45. 45. Vicente says:

    I know, I said there is no definition and I gave one straight away…

  46. 46. Arnold Trehub says:

    Vicente: “Consciousness is the enabler of existence.”

    Are you claiming that nothing existed before consciousness existed?

    No space-time?

    No fundamental forces?

    Nothing?

    So how did consciousness come into existence out of nothing?

  47. 47. Charles Wolverton says:

    Arnold -

    I reject the theater metaphor since, like Tom, I’m homicidal vis-a-vis “observers”. Any value the metaphor may have as a high-level heuristic seems to me overwhelmed by the damage it does in suggesting that we make decisions based on a literal “picture” of the world (AKA, our phenomenal experience). One can, of course, think of interpretive processes (“sensory/cognitive mechanisms”) working on neural activity as “observers” of that activity, but I don’t see that doing so helps. It can lead one into thinking that events “seen on the stage” – ie, the accompanying phenomenal experience – provide a basis for general decision making, whereas the ~0.5 sec delay in creating phenomenal experience seems to suggest that such experience can’t play a role in quick reaction decision making, and blindsight seems to suggest that it isn’t required for some perceptual tasks.

    Yes, I guessed wrong on the mechanics of the moon illusion, thinking that the presence or absence of a visible horizon in learned past experiences might have affected the perceptual process. But your assumption of evolutionarily beneficial anisotropy of the mapping of physical space (or visual sensory space??) to neuronal space seems reasonable. I assume that mapping neural activity due to stimulation of a viewer’s retinal receptors (eg, by light from a low-angle moon) to more I!-proximate Z-planes (effectively) applies the interpretive processing to input from a smaller area of the FOV, thereby potentially increasing effectiveness of the processing.

    This seems somewhat analogous to an analyst inspecting a specimen under a microscope and moving the microscope lens closer to the specimen, thereby effectively applying the greater “processing power” of the analyst’s more centrally located – and denser – retinal receptors to a smaller physical area of the specimen.

  48. 48. Richard J R Miles says:

    Arnold, #46, re. Vicente, #44.

    You can be quite pedantic when it suits you.

  49. 49. Vicente says:

    Arnold, I don’t know if you know what you’re getting into…

    Arnold: “So how did consciousness come into existence out of nothing?”

    So how did anything come into existence out of nothing? Or else, has there always been something? But how did that come into existence? Hmmm, out of nothing? Or is eternity the very nature of everything?

    Now, seriously, the point is that if on top of some Himalayan mountain there is a piece of rock on the ground that nobody will ever know about, except for its tiny contribution to the Earth’s gravity… then that particular piece of rock does not exist.

    Even more, whatever the big bang produced was not space-time, particles and forces, etc; those are byproducts (concepts) of intelligent minds, not real stuff… and I really don’t want to resume the ancient debate about substance…

    Reality is “strictly and in every sense” observer dependent. And I don’t like killing observers.

  51. 51. Arnold Trehub says:

    Vicente: “Reality is “strictly and in every sense” observer dependent. And I don’t like killing observers.”

    In science, physical reality is not observer dependent. Our *measurements* and *concepts* of reality are observer dependent.

    I don’t like killing observers either. In the retinoid model, our pre-conscious sensory and cognitive mechanisms are our observers,
    and retinoid space is our conscious experience of our pre-conscious observations.

    Also, I believe that the idea of the existence of *nothing* is incoherent.

  52. 52. Vicente says:

    Arnold,

    Arnold: “In science, physical reality is not observer dependent.”

    Regardless of effects like the quantum wave function collapse as a result of measurements, in general, physical models are not observer dependent, much worse, they are observer created.

    “Nothing” is beyond our conceptual capabilities, I agree. We, humans, cannot understand or imagine what “nothing” is. Probably, maybe, brain systems, like the retinoid system, that forces space into our “thinking scenario”, are responsible for that.

  53. 53. Arnold Trehub says:

    Vicente: ” … physical models are not observer dependent, much worse, they are observer created.”

    Physical models are *both* observer dependent and observer created, but why do you say that the observer-created model is “much worse” than the observer-dependent model?

    Vicente: “Probably, maybe, brain systems, like the retinoid system, that forces space into our “thinking scenario”, are responsible for that.”

    Exactly so !!!

  54. 54. Arnold Trehub says:

    Charles: “It [the theater metaphor] can lead one into thinking that events “seen on the stage” – ie, the accompanying phenomenal experience – provide a basis for general decision making, whereas the ~0.5 sec delay in creating phenomenal experience seems to suggest that such experience can’t play a role in quick reaction decision making, and blindsight seems to suggest that it isn’t required for some perceptual tasks.”

    But phenomenal experience/content *does* provide a basis for *reflective* decision making, even though *reflexive* “decisions” and blindsight can bypass cognitive analysis of phenomenal content.

    Charles: “I assume that mapping neural activity due to stimulation of a viewer’s retinal receptors (eg, by light from a low-angle moon) to more I!-proximate Z-planes (effectively) applies the interpretive processing to input from a smaller area of the FOV, thereby potentially increasing effectiveness of the processing.”

    In the moon illusion, the retinal image of the low-angle moon is actually mapped to a more I!-distal Z-plane than the high-angle moon because the availability of distal Z-planes decreases as a function of an object’s elevation in egocentric space. The low-angle moon looks larger than the high-angle moon for the same reason that an after-image appears to grow in size as the distance of a fixated surface increases; it is due to the paradoxical effect of the retinoid’s size-constancy mechanism when the size of the retinal image remains constant over changes in an object’s *apparent* distance from the observer. For more about this see pp. 318-320 here: http://people.umass.edu/trehub/YCCOG828%20copy.pdf .

  55. 55. Vicente says:

    Well, I say that because I somehow understand it as a step towards solipsism, which I dislike and which makes me feel lonely.

    Note that it is not that difficult to extend the idea from physical systems to persons…

    This is why it is so important to clean our minds from stupid prejudice and programming, and try to have a clean view of systems and people…

  56. 56. Charles Wolverton says:

    “phenomenal experience/content *does* provide a basis for *reflective* decision making”

    My impression is that this remains an open question. Here’s the argument leading to my skepticism. Where do you think it goes wrong?

    1. Whatever “processing” (quotes meant to indicate that I have no specific implementation in mind) goes on in producing responses to neural activity consequent to sensory stimulation, that neural activity is the only input from the immediate external environment (as opposed to inputs from internal sources, eg, memory). I envision the processing as comprising two top level subprocesses:

    (a) = generate the phenomenal experience (eg, “picture” a rapidly approaching object)

    (b) = analyze the pictorial representation of the environment, identify response options (eg, duck), choose and implement one.

    2. In some instances the information available in the neural activity is sufficient for (b) without (a), so (a) unnecessarily delays a response.

    3. “Reflexive” responses are often time critical, so (a) can, and should, be eliminated in producing them.

    4. Blindsight supports this thesis and provides other response types that aren’t dependent on step (a).

    5. This suggests that either (a) and (b) are parallel processes or that (b) is temporally prior to (a) – in which case (a) could be causally inert.

    The existence of phenomenal experience certainly suggests evolutionary benefits, although perhaps ones that are subtle and hence not immediately obvious. I agree that “reflective” responses seem a more promising hunting ground for them, but so far I’ve neither come up with any specific examples myself nor encountered others who have.

    “*reflexive* “decisions” and blindsight can bypass cognitive analysis of phenomenal content”

    Clearly, I don’t see those as “bypassing cognitive analysis of phenomenal content”, I see them as the result of cognitive analysis of the neural activity that is performed as part of (b). Perhaps we have a terminology disconnect here.

  57. 57. Arnold Trehub says:

    Charles, consider this scenario for primitive man:

    M = man
    T = tiger
    R = river
    F = food

    M —————————————————> distance

    (1) M ————–> F —-> R ———> T. M action?

    (2) M ————–> F —> T ———> R. M action?

    Wouldn’t the survival of M depend on a reflective decision contingent on a cognitive analysis of each situation on the part of M?

  58. 58. Charles Wolverton says:

    Yes, but I see our disagreement (or perhaps only disconnect) as being the architectural relationship between “cognitive analysis” and the production of phenomenal experience, not whether reflective analysis has survival value. I infer that you see the latter as providing input to the former. I don’t, nor do I even know what it would mean for a processor to have an “experience” as input – it certainly can’t literally observe one. And like “reflexive”, I see “reflective” as suggesting the time required for a decision, not whether or not the decision is based on phenomenal experience.

    Consider the simple case of a subject viewing a screen on which are projected various colored standard shapes. Asked to describe what is seen, a normal subject might answer something like “a red disc in the upper left hand corner, a blue triangle in the upper right hand corner, a green square in the middle, …”. Now, one can easily imagine a computer programmed to analyze the output of a digital camera, identify which of a set of simple geometric shapes and basic colors are being “viewed” and where, and then synthesize the same verbal “description”. I envision a subject’s brain working something like that. But unlike the test setup, a subject’s brain additionally creates (for the subject) the phenomenal experience of a mental image of the figures – an arguably unnecessary processing capability.
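    A toy sketch of that hypothetical program, purely for illustration (the detection stage is elided; the shape, color, and position names are just the ones from the example above):

```python
# Toy sketch of the hypothetical shape-describing program: it takes
# already-detected (shape, color, position) triples -- standing in for
# the output of the camera-analysis stage -- and synthesizes the kind
# of verbal description a subject might give. Purely illustrative.

def describe(detections):
    parts = [f"a {color} {shape} in the {position}"
             for shape, color, position in detections]
    return ", ".join(parts)

scene = [("disc", "red", "upper left hand corner"),
         ("triangle", "blue", "upper right hand corner"),
         ("square", "green", "middle")]

print(describe(scene))
# a red disc in the upper left hand corner, a blue triangle in the
# upper right hand corner, a green square in the middle
```

    The point of the sketch is only that nothing in such a pipeline requires a pictorial intermediate; detection feeds description directly.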

    Because almost all of us have phenomenal experience but (almost?) none of us directly experience the associated “cognitive analysis”, we assume that we respond based on the former. But blindsight – and less dramatically, reflexive reaction – call that assumption into question, since the subject has neither experience but can nevertheless demonstrate behavior consistent with the analysis processor’s working (to some extent). And that could happen were the two processes either in parallel or in the temporal sequence “first analyze, then image” and were the latter to cease functioning. But it couldn’t happen were the temporal sequence reversed.

    This all seems obvious to me, so apparently I’m missing something.

  59. 59. Arnold Trehub says:

    Charles: “Yes, but I see our disagreement (or perhaps only disconnect) as being the architectural relationship between “cognitive analysis” and the production of phenomenal experience, not whether reflective analysis has survival value.”

    OK, here’s my architectural schematic of the relationship between phenomenal experience and cognitive analysis. Remember that I define phenomenal experience (conscious content) as a transparent brain representation of the world from a privileged egocentric perspective. In my theoretical model this is just the global autaptic-cell excitation pattern in retinoid space in our extended present. Situations 1 and 2 in my post #57 illustrate salient parts of M’s conscious content during each of the situations. In order to act adaptively, M has to attend to the critical elements in the scenario, recognize them, and assess the relevant affordances. This cognitive analysis can happen only *after* M has the relevant phenomenal experience of his personal situation. So the minimal architecture and flow chart would look like this:

    [real world] –> [phenomenal world (retinoid space)] –> [cognitive analysis (pre-conscious brain mechanisms)] –> [adaptive action] –> [real world] –> [phenomenal world], etc.

    How would you sketch it?

    You might get a better idea about this from these two papers:

    http://theassc.org/documents/where_am_i_redux

    http://evans-experientialism.freewebspace.com/trehub01.htm

    Charles: “Now, one can easily imagine a computer programmed to analyze the output of a digital camera, identify which of a set of simple geometric shapes and basic colors are being “viewed” and where, and then synthesize the same verbal “description”. I envision a subject’s brain working something like that. But unlike the test setup, a subject’s brain additionally creates (for the subject) the phenomenal experience of a mental image of the figures – an arguably unnecessary processing capability.”

    There is an “elephant in the room” in this formulation that apparently you don’t see.

  60. 60. Arnold Trehub says:

    Peter, I’m having that “awaits moderation” problem again.

  61. 61. Peter says:

    It might help to ease off on the links, Arnold: you tend to insert similar links in your comments quite regularly and what’s happening is that the software thinks that looks like spamming.

  62. 62. Arnold Trehub says:

    So that’s what does it. My apologies to the software. I put the links in so that I don’t have to repeat long explanations. If Charles had already read the papers I linked, I guess the software was justified, Peter.

  63. 63. Vicente says:

    Charles: “I don’t, nor do I even know what it would mean for a processor to have an ‘experience’ as input – it certainly can’t literally observe one.”

    Right. But physicalists, and dual-aspect monism followers, should accept it. According to their position, the experience, or the memory of the experience, has to be somehow coded, processed and maybe stored, as some neurological correlate of the (phenomenal) experience. So, once it is coded, why wouldn’t it be possible to use it as the input for a processor? It would be like recalling an extremely vivid and detailed memory, like having that same experience again. In addition, it would be necessary to check if there are neural paths in place to enable this input.

    But your intuition tells you it makes little sense, doesn’t it? Experiences are in the present by definition; otherwise they are memories, mostly inaccurate.

    For this reason, I find this initiative to freeze the brain (for the future!) that Peter has posted so funny… we don’t really know how the brain codes information. Dynamic systems are described by their phase space, not by an instantaneous picture (as Peter wisely pointed out). These guys are going to prove that movement is impossible because at each instant the moving object is still. Fortunately for the advisory board, there is a disclaimer stating that they don’t necessarily share the initiative’s ideas and opinions.

  64. 64. Charles Wolverton says:

    “There is an “elephant in the room” in this formulation that apparently you don’t see.”

    Perhaps, or maybe just some unstated assumptions, eg:

    1. The programmer quickly “teaches” the computer the set of identifiable shapes and colors against which the shapes extracted from the camera output are compared. We learn them via a long, tedious process of instruction.

    2. Our “processing” is analog, not digital. In particular, I think of the relevant neural networks (biological, not computational) as being analogous to so-called matched filters in signal processing that respond maximally to a specific input signal but also produce an attenuated response to signals that are in a relevant sense “close” to the signal to which the filter is matched. Hence, our ability to recognize (in comm system lingo, “detect”) neural activity patterns (consequent to sensory inputs) that aren’t exact duplicates of previously learned patterns (ie, are “noisy”).

    3. My example is grossly simplified because I’m just addressing a basic, primitive architectural question. So, I’m ignoring complexities like dealing with motion, binding, etc.
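    The matched-filter analogy in point 2 can be made concrete with a toy correlation detector; the ±1 patterns below are invented for the illustration and correspond to nothing in the model:

```python
# Toy matched filter: the response to an input is its correlation
# (dot product) with a stored template. The response is maximal for
# the matched signal and attenuated for "close" (noisy) signals.
# All patterns are invented +/-1 examples.

def response(template, signal):
    return sum(t * s for t, s in zip(template, signal))

plus = [-1, 1, -1, 1, 1, 1, -1, 1, -1]     # template (a 3x3 "plus")

exact = plus[:]                             # matched input
noisy = plus[:]
noisy[8] = 1                                # one element flipped
inverse = [-t for t in plus]                # maximally mismatched

print(response(plus, exact))    # 9  (maximal)
print(response(plus, noisy))    # 7  (attenuated)
print(response(plus, inverse))  # -9 (minimal)
```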

    Shall I keep guessing, or do I get a clue as to what the elephant is?

    In rereading the first of those two papers, I immediately encounter what may indeed be a terminology disconnect: “transparent brain representation”. First, I assume “representation” refers to any kind of structured information – numerical data, neural activity patterns, bit maps, etc – that could be mapped with some degree of fidelity back into the entity to be represented. And I assume that light reflected from an object onto a subject’s retina causes patterns of neural activity at various points in the processing chain. Then I would consider each such pattern to be a representation of the object, in particular patterns in retinoid space.

    But then I can’t quite grasp the idea that such patterns are “transparent” in the sense you state, viz, “not experienced as the activity of one’s brain” since I’m not sure how to interpret “experienced as”. Such neural activity is part of what constitutes input to the interpretive processors, so they do seem to “experience” the representations “as” neural activity in the brain. Of course, subjects don’t “experience” brain activity directly in the way that Rorty’s Antipodeans do – “Now I’m experiencing neural activity patterns P-497 and P-435 simultaneously, so that must be a red tomato” – but they experience mental images (what I mean by “visual phenomenal experience”) that in some sense “represent” the neural activity patterns and therefore are representations of representations.

    At which point I’ll stop and hope that you can clarify some of this for me.

  65. 65. Arnold Trehub says:

    Charles, you wrote: “Now, one can easily imagine a computer programmed to analyze the output of a digital camera, identify which of a set of simple geometric shapes and basic colors are being “viewed” and where, and then synthesize the same verbal “description”. I envision a subject’s brain working something like that.”

    Notice that the output of the required digital camera in your formulation is either a 2D image on an electro-photo screen or a table of digitized 2D addresses that the computer is supposed to “view” (take as input) and analyze. (1) What in the brain would correspond to the digital camera? (2) How could the “digital camera” in the brain possibly represent itself in perspectival relation (subjectively) to the scene represented by the camera? (3) Since the “digital camera” would have to be part of the brain, how would it represent itself as the point of *origin* within the global volumetric surround from which the 2D image was parsed? These functions would have to be performed *before* your computer could analyze its input. These are the very functions that are performed by phenomenal experience.

    To my knowledge, the brain’s putative retinoid system, with its particular neuronal structure and dynamics, is the only mechanism that can perform these functions. I know of no existing artifact that can do what the retinoid mechanism can. However, there is serious work now going on to better understand what might be required to build a retinoid mechanism. For example see:

    Kovalev, A. Visual Space and Trehub’s Retinoids. *OPTOELECTRONICS, INSTRUMENTATION AND DATA PROCESSING* Vol. 47 No. 1 2011.

    Kovalev, A. Stability of the Vision Field and Spheroidal Retinoids. *OPTOELECTRONICS, INSTRUMENTATION AND DATA PROCESSING* Vol. 48 No. 6 2012.

  66. 66. Charles Wolverton says:

    Arnold:

    My hypothetical “system” was intended to support an architectural interpretation. Let me once and for all state that I have neither the interest nor the ability to critique a detailed implementation, never mind to propose an alternative one. I accept your proposed implementation but often find your accompanying system-level statements confusing (owing, I think, to our attaching somewhat different meanings to key words) and am basically just playing devil’s advocate in an attempt to clear away that confusion. Addressing the second part of my comment 64 would be helpful in doing so.

    The camera in my hypothetical system is analogous to the eye, hence clearly isn’t “in the brain” (ie, computer).

  67. 67. Arnold Trehub says:

    Charles, I fully support playing devil’s advocate; it is an essential role in the enterprise of science.

    Charles: “Our ‘processing’ is analog, not digital. In particular, I think of the relevant neural networks (biological, not computational) as being analogous to so-called matched filters in signal processing that respond maximally to a specific input signal but also produce an attenuated response to signals that are in a relevant sense ‘close’ to the signal to which the filter is matched. Hence, our ability to recognize (in comm system lingo, ‘detect’) neural activity patterns (consequent to sensory inputs) that aren’t exact duplicates of previously learned patterns (ie, are ‘noisy’).”

    Yes, in my model of the cognitive brain, detection (recognition and classification) of pure or “noisy” patterns/signals is performed by arrays of filter-cells arranged as comb filters in what I call a synaptic matrix. These are the pre-conscious brain mechanisms in our sensory modalities. Conscious experience, as such, is just a global pattern of autaptic-cell activation in our egocentric retinoid space (subjectivity) and does not require matched filters. The detection of isolated patterns that are *parsed out* of our global conscious content (retinoid space) does require matched filters — the filter-cells of the synaptic matrix. For an example of this see “Self-Directed Learning in a Complex Environment” here: http://www.people.umass.edu/trehub/thecognitivebrain/chapter12.pdf

  68. 68. Charles Wolverton says:

    Arnold – I’ve started reading Chapter 1 of your book and have been slowed by interruptions, but now I should be able to spend some time on it. I already have some questions, but I’ll save them for now in case they get answered as I read. Stay tuned.

    Just out of curiosity, in looking at the transformations from the neural activity input vector space to the pattern or line output vector spaces, have you (or anyone else) found either matrix or coding theory applicable?

  69. 69. Charles Wolverton says:

    To see the motivation for that last question, consider the example detection scenario illustrated in Table 2.1. Instead of using weighted sums, one could use minimum (Hamming) distance decoding (equivalently, nearest neighbor (Hamming distance) pattern recognition). These can be viewed as a matrix operation on the input vector where the two rows of the matrix are the two learned vectors and the matrix multiplication of each row times an input vector V is done according to the rule Mi X Vi = 1 if the two elements are the same, 0 otherwise. Then the input vector is categorized based on which element in the resulting two element vector is larger.

  70. 70. Arnold Trehub says:

    Charles, I’m not knowledgeable enough about Hamming-distance decoding to make an intelligent comment about this kind of decoding within plausible neurophysiological constraints. But if you did apply such a decoding scheme, would the filter-cell activation levels correspond in ordinal values to those shown in Fig.3.3? If not, Hamming-distance decoding would not be optimal in a cognitive brain.

  71. 71. Charles Wolverton says:

    The Hamming distance between two vectors is just the number of components that differ. Eg, in your Table 2.1 the Hamming distance between each learned vector and itself is zero and the Hamming distance between the two learned vectors is four. In terms of logic operations, the Hamming distance is the result of doing an exclusive-or on each component pair and summing.

    I think this corresponds to something like assigning weight 1 not to each line that fires (ala Table 2.1, nonnormalized) but instead to each line that fires in the learning phase but doesn’t in the detection phase and vice versa. Then the occurrence of each learned vector would yield a score of zero for its class line, while the scores on other class lines would range from 2 (closest) to 7 (furthest). I’ll have to leave you to judge whether those operations are implementable within neurophysiological constraints.

    As I understand Figure 3.2, each line that fires is assigned the sum of the weights in the detection matrix, and the maximum sum wins. Doing this for the noisy input 010111000 (plus-sign with the bottom removed) results in sums for class lines 3 and 4 equal to 19 and 20 respectively, which represents a bias in favor of the latter. The Hamming distance between the noisy vector and each of the two learned vectors corresponding to those class lines is equal to 1, ie, is unbiased between them. In principle, ties would be broken randomly.
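    For what it’s worth, the quantities above are easy to check in a few lines. Assuming the plus-sign is the 3x3 pattern 010111010 (an assumption; Table 2.1’s actual learned vectors aren’t reproduced here), the “plus-sign with the bottom removed” is indeed at Hamming distance 1 from it:

```python
# Hamming distance (the XOR-and-sum described above) and minimum-
# distance decoding. The plus-sign pattern 010111010 is an assumed
# reconstruction; the learned vectors of Table 2.1 aren't given here.

def hamming(u, v):
    # Number of components in which the two vectors differ.
    return sum(a != b for a, b in zip(u, v))

def decode(x, learned):
    # Nearest-neighbor decoding; in principle ties are broken randomly.
    return min(learned, key=lambda v: hamming(x, v))

plus = "010111010"    # assumed 3x3 plus-sign, read row by row
noisy = "010111000"   # the plus-sign with the bottom removed

print(hamming(plus, plus))   # 0: a learned vector vs itself
print(hamming(plus, noisy))  # 1: one component differs

print(decode(noisy, [plus, "000000000"]))  # nearest learned vector: plus
```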

  72. 72. thoughts on thoughts » Blog Archive » Massive parallel processing says:

    [...] the ConsciousEntities site, there is a discussion of some of Dennett’s ideas (here). Near the end of the post there is a paragraph about parallel processing that I find [...]
