Picture: Astrocyte. Alfredo Perreira Jnr has kindly let me see an ambitious paper he and Leonardo Ferreira Almada have produced: Conceptual Spaces and Consciousness: Integrating Cognitive and Affective Processes (forthcoming in the International Journal of Machine Consciousness).

The unifying theme of the paper is indeed the integration of emotional and neutral cognitive processes, but it falls into two distinct parts.

The first, drawing on the work of Peter Gärdenfors, sets out the heady vision of a universal state space of consciousness.  Such a state space, as I understand it, would be an imaginary space constructed from a large number of dimensions each corresponding to one of the continuously variable aspects of consciousness. In principle this would provide a model of all possible states of consciousness, such that anyone’s life experience would form a path through some area of the space.
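To make the idea a little more concrete, here is a toy sketch of what such a state space might look like computationally. Everything here is invented for illustration: the dimension names, the coordinate values, and the choice of Euclidean distance as a similarity measure are all my assumptions, not anything proposed in the paper.

```python
# Toy sketch of a Gärdenfors-style state space (dimension names invented).
# Each dimension is one continuously variable aspect of consciousness,
# a state is a point in the space, and experience over time is a path.
import math

DIMENSIONS = ["arousal", "valence", "attentional_focus", "vividness"]

def make_state(**values):
    """A state is a point: one coordinate per dimension, defaulting to 0.0."""
    return tuple(values.get(d, 0.0) for d in DIMENSIONS)

def distance(a, b):
    """Euclidean distance: a crude stand-in for how 'similar' two states are."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# A fragment of someone's 'life path': a trajectory through the space.
trajectory = [
    make_state(arousal=0.2, valence=0.6, attentional_focus=0.5, vividness=0.4),
    make_state(arousal=0.7, valence=0.2, attentional_focus=0.9, vividness=0.8),
    make_state(arousal=0.4, valence=0.5, attentional_focus=0.6, vividness=0.5),
]

# The 'length' of a stretch of experience: summed distances along the path.
path_length = sum(distance(a, b) for a, b in zip(trajectory, trajectory[1:]))
```

Even this trivial version makes one thing clear: the model only works if every relevant aspect of consciousness really can be treated as a continuous, independent dimension, which is exactly what the worries below put under pressure.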

The challenges involved in actually populating such a theoretical construct with real data are naturally daunting. Perreira and Almada suggest that it could be approached on the basis of reported states of consciousness. An immediate problem is that qualia, the essence of subjective experience, are widely considered to be unreportable: Perreira and Almada meet this head-on by adopting the heterophenomenology of Daniel Dennett. This approach (which I think implies scepticism about ineffable qualia) is based on studying phenomenal experience indirectly, through what subjects say about their own experience: the third-person perspective. Perreira and Almada note that Dennett adopted this stance mainly as a means of refuting first-person approaches, but I’m sure he would (or will) be delighted to hear of its being adopted as the explicit basis of serious research.  It’s implicit in this approach that we’re dealing with states that are capable of ‘inter-subjective validation’, that is, that they’re states which are accessible to all conscious entities.  This rules out objections on the grounds that, say, Andy having experience X is not the same as Bill having experience X, though in so doing it may appreciably impoverish the scope of the exercise. It could be that the set of experiences common to all conscious beings is actually a significantly restricted sub-set of the whole realm of conscious experience.  For that matter, can we afford to ignore the unconscious or the subconscious?  At times the borderline between conscious states and their near relations may be blurry.


I think two other worries are worth a mention. The state space model suggests that all trajectories are equally valid, but it seems unlikely that this is the case here. Consciousness is a stream, both emotionally and cognitively: certain kinds of state naturally follow other kinds of state. In fact, it doesn’t seem too much to claim that some states refer to previous states: we can’t repent our anger intelligibly without having first been angry. The business of reference, moreover, is a problem in itself.  We’ve already excluded the possibility of Andy’s anger being different from Bill’s, but we can also be in the same state of anger about different things, which seems a material difference. Me being angry about my tax return is not really the same state of consciousness as me being angry about receiving a parking ticket, though in principle the anger itself could be identical. Because, thanks to the miracle of intentionality, we can be angry about anything, including imaginary and logically absurd entities, this is a large problem. Either we exclude these intentional factors – and put up with a further substantial impoverishment of our state space – or the size of our state space balloons out infinitely in all directions.

The practical problems are not necessarily fatal, of course: it’s not as if Perreira and Almada were actually proposing a fully-documented description of the universal state space. What they do suggest is that if we assume another state space (wow!) corresponding to all the possible biophysical states of the brain, we can then hypothesise a mapping of points in one space to points in the other, which would give us the prize of a reduction of conscious experience to physical brain function.  Now I think a biophysical state space of the brain faces formidable difficulties of its own: for one thing we really don’t know exactly which biophysical features of the brain are functionally relevant; for another different brains are not wired the same way – and of course the sheer complexity of the thing is mind-boggling. The biophysical state space of a single neuron is a non-trivial proposition.
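The bare bones of that hypothesised mapping can be sketched in a few lines. Again, everything here is made up for the sake of illustration: the paper claims only that some function takes points in the biophysical space to points in the phenomenal space, and I am standing in for that unknown function with a crude nearest-neighbour lookup over invented paired observations.

```python
# Minimal sketch of the hypothesised mapping between two state spaces
# (all data invented for illustration). The minimal claim is just that
# some function carries biophysical points to phenomenal points; a
# nearest-neighbour lookup over paired observations stands in for it here.
import math

# Paired observations: (biophysical state, reported phenomenal state).
paired = [
    ((0.1, 0.9, 0.3), ("calm", "low arousal")),
    ((0.8, 0.2, 0.7), ("anxious", "high arousal")),
    ((0.5, 0.5, 0.5), ("neutral", "mid arousal")),
]

def nearest_phenomenal(bio_state):
    """Map a biophysical point to the phenomenal state of its nearest
    recorded neighbour -- a stand-in for the hypothesised mapping."""
    def d(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, phenomenal = min(paired, key=lambda pair: d(pair[0], bio_state))
    return phenomenal
```

The sketch also exposes the weak point: it assumes we already know which biophysical coordinates matter and that the pairing of observations is well defined, which is precisely what is in doubt.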

However, at a purely theoretical level, this is a nice rigorous statement of what the much-sought Neural Correlates of Consciousness might actually be. If we merely claim that there is a mapping between the two state spaces, we have a sort of rock-bottom version of NCCs, a possible statement of the minimum claim.  We would expect there to be some more general correspondences and matches between regions and trajectories in the two spaces – though I think it would be optimistic to expect these to be simple (and constructing the two state spaces and then observing the regularities would be a remarkably long way round to discovering correspondences between brain and mind activity). Still, the fact that these pesky NCCs turn out to be more abstract and problematic than we might have hoped is in itself a conclusion worthy of note.

All these heroic speculations are in any case just the hors d’oeuvres for Perreira and Almada: the state space of consciousness would have to represent emotional affect as well as rational cognition: how would that work?  They proceed to review a series of proposals for integrating emotion and cognition. Damasio’s Somatic Marker Hypothesis, which has emotional affect deriving from states of the body, is favourably considered, though criticised for elements of circularity. The alternative view that emotions come first avoids such problems but is criticised for not squaring with empirical evidence. Perreira and Almada suggest a better third alternative might be based on mapping the complex inter-relations of cognition and affect, and give a friendly mention to the oft-quoted Global Workspace theory of Bernard Baars. Now we can begin to see where the discussion is going, but first the paper brings in a new element.

This is a discussion of what actually makes mental states conscious – embodiment, higher order states, or what? Perreira and Almada look at a number of proposals, including Arnold Trehub’s retinoid system and Tononi’s concept of Phi, a measure of integrated information. Briefly, they conclude that something beyond all these approaches is needed, and something which puts the integration of affect and cognition at the heart of the system.

Now we come to the second major part of the paper, where Perreira and Almada introduce their own proposal: step forward the astrocytes.

Astrocytes are the most common form of glial cell, which are ‘the other brain cells’. Neurons have generally had all the glory in the past; historically it was assumed that the role of glia was essentially as packing for the neurons – in fact ‘glia’ is the Greek word for ‘glue’.  In recent years, however, it has become gradually clearer that glia, and astrocytes in particular, are more important than that. They form a second network of their own, across which ‘calcium waves’ are propagated.  I think it would be true to say that the standard view now sees astrocytes as important in supporting and modulating neural function, while still reserving the main functional significance for all those showy synaptic fireworks that neurons engage in. Perreira and Almada want to give the neural and glial networks something like parity of esteem. The proposal, in essence, is that plain cognition is done by the neurons, while feelings are carried by large astrocytic calcium waves: only when the two come together does consciousness arise. Consciousness is the “astroglial integration of information contents carried by neurons”.

What about that?  It’s a bold and novel hypothesis (something we certainly need); it’s at least superficially plausible and has a definite intuitive appeal.  But there are objections.  First, there seem to be other established candidates for the role of feeling-provider.  We know that certain parts of the brain are required for certain kinds of affect – the role of the amygdala in producing ‘fear and loathing’ (or perhaps we should say ‘reasonable distrust and aversion’) has been much discussed. Certain emotions are almost proverbially (which of course is not to say accurately) associated with hormones and the action of certain glands. This needs to be addressed, but I don’t think Perreira and Almada would have too much difficulty in setting out a picture which accommodated these other systems.

More difficult I think are two more fundamental questions. Why would astrocytic calcium waves cause, or amount to, feelings?  And why would those feelings, when associated with cognitive information, constitute consciousness? Damasio’s and other theories can offer a clearer answer on the first point because it’s at least plausible that emotional states can be reduced to the pounding heart, the watering eyes, the churning stomach: calcium waves rippling across the brain somehow don’t seem as obviously relevant. And then, is it really the case that all conscious states have emotional affect? Perreira and Almada suggest that if neurons alone are involved (or astrocytes alone) all you get is proto-consciousness: but intuitively there doesn’t seem anything difficult about completely dispassionate but fully conscious thought.

One strength of the theory is that it seems likely to be more open to direct scientific testing than most theories of consciousness: a few solid experiments would probably relegate the kind of objection I’ve mentioned to secondary status. So perhaps we’ll see…


36 Comments

  1. John says:

    A “state space” is a vague term that can be applied to the abstract mathematics of stock exchange movements, the possible input-output states of a control system, a Hilbert Space in QM etc…

    There can be no doubt that Perreira and Almada are right to call whatever happens in experience a “state space” but in doing so they are simply saying that it is physical. As you point out, the real philosophical problem is how the physical becomes experience but putting this to one side for the moment, if by using fashionable words like “state space” it is possible to get neuroscientists looking for the location of conscious experience again then it is a good thing. (Although we probably already know roughly where to look – see Where is conscious experience?)

  2. Vicente says:

    Why would astrocytic calcium waves cause, or amount to, feelings?

    Does the paper refer to any experimental result that supports this idea?

    How come many drugs that have a clear effect on emotions (e.g. anxiety), such as the benzodiazepine family, act on neuronal membrane receptors – mostly GABA receptors that modulate Cl channels, among others? So far no role has been assigned to astrocytes in this sense, or has it?

    I haven’t read the paper anyway. Just felt quite curious about this comment of yours, Peter.

  3. Peter says:

    Vicente – the paper mentions some research by Tononi et al that seems to show that lack of connectivity during non-REM sleep (and the associated lack of consciousness, presumably) results from deactivation of the astrocyte network – though there are alternative interpretations.

    There’s quite a lot more detail in the paper, including the curious conclusion that the signals in the astrocyte network must be AM not FM, because FM needs a decoder. Unfortunately I think that issue of the journal is still not online yet.

  4. Charles Wolverton says:

    “the signals in the astrocyte network must be AM not FM, because FM needs a decoder.”

    It’s not important, but as a (ex) comm guy I’m just curious: is that literally what was said? Both presumably need a de”M”odulator, and it would seem reasonable that both would need a “decoder”. But that one needs a decoder and the other doesn’t seems strange.

  5. Vicente says:

    Charles, they probably refer to the fact that the possible signals are exchanged in terms of local time variation of extracellular medium Ca++ concentration (waves over larger areas), which can be thought of as amplitude modulation. Astrocytes are sensible and can control surrounding electrolyte concentration, which would be equivalent to an AM modulation/demodulation, but they lack any biophysical process that senses fast oscillation of the Ca++ concentration; they probably follow saturation/relaxation very fast with increasing frequency, so they can’t perform FM demodulation and extract information from the frequency variation.

    Neurons on the other hand are sensible to firing rates, FM radios !?

    I shouldn’t talk anymore about a paper I haven’t even read (although I trust Peter’s summary) for the sake of a best practices policy, but I have a hunch that I am not going to like it much once I get to read it.

  6. Vicente says:

    he he sensible -> sensitive, sorry, well maybe individual neurons are sensible too, but that would deserve a blog page for itself.

  7. John says:

    How a physical phenomenon could become experience is a deep mystery. The place to look for how FM or phase changes or AM could become experience is in experience itself, see Motion and Change.

  8. Arnold Trehub says:

    John: “How a physical phenomenon could become experience is a deep mystery.”

    It seems to me that as soon as you think in terms of X (a physical event) *becoming* Y (a conscious event) you lock yourself into an explanatory gap. In terms of double-aspect monism, if X and Y are two different aspects of the same underlying reality, we can think of a particular kind of X-brain (under aspect-1) as *being* Y-conscious (under aspect-2). The scientific problem then becomes one of understanding the systematics of the relationship between aspect-1 and salient instances of aspect-2.

  9. Charles Wolverton says:

    “extracellular medium Ca++ concentration (waves over larger areas)”

    Too many overlapping vocabularies. Since the topic was modulation, when I first read this I thought it was a reference to a modulation technique called code-division-multiple access (CDMA) used in “cellular” phone communication to spread the signals “over large areas” of the frequency spectrum – and assumed “Ca++” was the latest version of CDMA. (Like C+ in programming.) No wonder nobody knows what anyone else is talking about these days!

  10. Vicente says:

    It seems to me that as soon as you think in terms of X (a physical event) *becoming* Y (a conscious event) you lock yourself into an explanatory gap. In terms of double-aspect monism, if X and Y are two different aspects of the same underlying reality, we can think of a particular kind of X-brain (under aspect-1) as *being* Y-conscious (under aspect-2). The scientific problem then becomes one of understanding the systematics of the relationship between aspect-1 and salient instances of aspect-2.

    So you are saying that consciousness itself is beyond the sphere of science, and scientists have to make themselves happy with relationships between observed brain states and observed behaviours or reported testimonies, isn’t it?

    Then to talk about dual aspect monism or any other mythological, nigromantic middle age superstition explanation makes no difference.

  11. Arnold Trehub says:

    Vicente: “Then to talk about dual aspect monism or any other mythological, nigromantic middle age superstition explanation makes no difference.”

    I’m not sure what you mean by the statement about dual-aspect monism making no difference. Difference in what respect?

    But knowing consciousness itself ( Y as such, the essential nature of Y) is no more possible than knowing the essential nature of the chair that you are sitting on. Science is not omniscient.

  12. Vicente says:

    Arnold,

    What I mean is that if we accept that the ultimate explanation is metaphysical in the best case, why appeal to nonsense like dual aspect monism, just bullshit. If we don’t know yet, we don’t know yet.

  13. John says:

    Arnold: “In terms of double-aspect monism, if X and Y are two different aspects of the same underlying reality, …”

    Is there really any difference between the events in my experience and the events in the world? What I have is arrangements of events; events laid out in space and time. I can cunningly relate these arrangements of events to what I suspect are events outside of my experience and build a consistent computational system called “science” from these relations. My science tells me that there is a vast universe with events in wide variety of arrangements.

    So, can I directly relate events in a measuring instrument to events in my experience – can I incorporate my experience into science? Very likely. If my experience were, for instance, an electric field I could stick an electrode in the appropriate place and an event would pop up directly in my experience. The event in my experience would be what an electric field actually IS at the time and place that it occurs. The voltmeter attached to the electrode would give me information about the field that could be used in my “science” but “volts” or “volts per meter” are only relations of the field, the green or blue or throb etc. in my experience would be the actual field. The electric field would be an event in my experience. This event might be a zone of “green” but the green is itself, not a relation.

    This all suggests that the main difference between events in my experience and events in the world at large is largely a matter of whether the events are directly in my experience or not. If the events are not directly in my experience all that I know about them is “information” about these events and this information eventually produces arrangements of events that I actually can experience.

    There is no room in any of this for “dual aspects”. I am a small set of events somewhere in my brain. These events can be related to events elsewhere and the consistent body of relations that this generates is called “science”. Relationships between events are not events themselves so the collection of these relationships that we call “Science” is not reality, it is a description of reality. Reality is my experience plus the events that I am pretty sure are out there but not directly within my experience.

  14. Arnold Trehub says:

    John,

    Thanks for your probing comment. You wrote:

    “There is no room in any of this for ‘dual aspects’. I am a small set of events somewhere in my brain. These events can be related to events elsewhere and the consistent body of relations that this generates is called “science”. Relationships between events are not events themselves so the collection of these relationships that we call ‘Science’ is not reality, it is a description of reality. Reality is my experience plus the events that I am pretty sure are out there but not directly within my experience.”

    If I understand you correctly, it seems to me that we are using different words to describe the same situation. You say “… ‘Science’ is not reality, it is a description of reality.” and “Reality is my experience [Subjective] plus the events that I am pretty sure are out there but not directly within my experience.” [Objective] So in the science of consciousness you have descriptions of a subjective domain (experiencing from inside a brain) and descriptions of an objective domain (e.g., experiencing a brain from the outside). Would you agree that the relevant vocabularies of the two domains are significantly different? Would you also agree that the two domains are a part of an all-enveloping monistic domain, an underlying physical reality called the real world? If so, the difference between events in your experience and events in the world lies in the difference between the vocabularies we use to describe the subjective and the objective. These two vocabularies necessarily arise from the two different perspectives/aspects on/of the real world. It seems to me that double-aspect monism is a succinct phrase for capturing this state of affairs.

  15. John says:

    Hi Arnold,

    You said: that I have “a subjective domain (experiencing from inside a brain) and descriptions of an objective domain (e.g., experiencing a brain from the outside)”

    I would say that I have experience and this can contain content that I believe may be related to external events. There is a set of events that I call “my mind” that are arranged in time and space and these include events such as inner speech or a view of someone’s grey matter through a flap in their skull.

    I extrapolate from “science” that there are events that are not contained in my experience. So if there is any dual aspect it is between the observed and the currently unobserved.

    I only have one “perspective”, my current experience. Those who imagine their own experience in terms of looking at some grey matter in the lab and probing into it to find electrical signals or other correlates of experience are missing the essential point that all they are gathering is information derived from measurements. They will end up asking apparently paradoxical questions such as “how can a pattern of nerve impulses be pain?”.

    I will tell them how to resolve this question, stick the electrode in your own skull! The “how can a pattern of nerve impulses be pain?” question is naive realist. What is being asked is how can a pattern of impulses “over there” be my experience – obviously it cannot. If I stick the electrode in my own brain I will immediately know that there is “pain” and the question becomes “how does pain create the observed correlates in measuring instruments?”. After all, we know the pain, it is clearly a more complex physical entity than a voltage measurement so the only unknown is the transform between this pain and our simple measurements.

    Observation is the placing of events that are derived from measurements within the geometrical manifold of my experience. The measurements themselves are just changes in state in measuring instruments such as rulers or eyes and these become observations once they are given a location and a motion within my mind so that they can be instantaneously related to the other events in my experience (measurements alone, such as columns of numbers in a spreadsheet, do not have this attribute). I can never observe a measurement directly but my brain can process the data from a measurement to produce an observation that is related to the measurement. I still cannot see this in terms of any dual aspect. If I imagine an electric field in someone else’s brain I am imagining a set of measurements – for instance I might imagine a sort of cloud enveloping part of their brain like the colours on an fMRI scan. However, if the electric field were truly responsible for experience I would be wrong to imagine the field as a set of measurements, the field itself would actually be pain or green or hot within my own geometrical manifold.

  16. Charles Wolverton says:

    I suspect that John is expressing yet again – though from a somewhat different perspective – the general theme that Arnold and Tom express – also each from his own perspective – that talking about the body and talking about the mind are fundamentally different and therefore require the use of different vocabularies. In the case of the body, there are actually multiple vocabularies that bring “salience” to different functions of interest, eg, the vocabularies of neurophysiology, biology, anatomy. In the case of the mind, there is the vocabulary of psychology, in particular the “intentional idiom” (Brentano sense).

    At the risk of being tedious, I will yet again recommend the Ramberg essay in “Rorty and His Critics” as an extremely insightful discussion of the relationship between those vocabularies. I’m not suggesting that Ramberg is “right” in what he says – that’s way beyond my ability to argue – just that it clarifies (or at least has for me) several issues about that relationship that I have found elusive: reducibility and translatability of vocabularies ala Quine; the role of Davidson’s “principle of charity” in interpreting a language; the relationships among language, the mind, and the concept of “truth”; et al. (And all in a mere 17 pages – although for me, incredibly dense pages.)

    I am not capable of summarizing the essay even were I willing to try, but the bottom line as I understand it is that talking about the mind is fundamentally different from talking about other parts of the world, so that:

    – concepts like “reducibility” and “translatability” are inapplicable

    – there is a fundamental problem with intersubjective agreement (required in order to be able to talk about anything) when one tries to talk about the mind (in essence, there is an infinite regress problem – agreeing among a group of minds on how to talk about a mind)

    – there is a crucial role for a concept of “truth” in human communication (in his response, Rorty accepts this, thereby relenting somewhat in his long-standing “anti-truth” campaign)

    Over and out.

  17. Arnold Trehub says:

    John,

    You wrote:

    “I extrapolate from “science” that there are events that are not contained in my experience. So if there is any dual aspect it is between the observed and the currently unobserved.”

    This is a tricky business. I agree that there is a difference between the observed and whatever isn’t observed. But what isn’t observed has *no aspect* for us. Dual-aspect monism refers to the difference between what is *directly experienced* (subjective brain content, aspect-1) and what is *observed via our sensory transducers* (objective brain content, aspect-2). The scientific problem is to formulate a brain theory that can explain and predict subjective brain content (aspect-1) on the basis of objective brain content (aspect-2). In my view, these two aspects of the brain are in two different descriptive domains that have to be bridged on the basis of corresponding analogs.

    For example, look at Fig. 1 here:

    http://journalofcosmology.com/Consciousness130.html

    An objective representation of a brain could be an abstracted description or depiction of E1* in your phenomenal world; but your subjective experience of E1* is a direct, unanalyzed event, an integral part of the plenum of your phenomenal surround in retinoid space. The subsequent analysis/abstraction of your subjective experience of E1* is performed by various kinds of unconscious brain mechanisms (see Fig. 3) which then lead to actions that others can observe, as well as to the inner speech and images you might experience in thinking about the brain and consciousness.

  18. John says:

    Charles: “…talking about the body and talking about the mind are fundamentally different and therefore require the use of different vocabularies”

    Arnold: “In my view, these two aspects of the brain are in two different descriptive domains that have to be bridged on the basis of corresponding analogs.”

    I am actually saying something very different. Suppose experience were due to an electric field, we could apply a voltmeter repeatedly to the field to draw a map of the potential at any point. The question “how could experience arise from the field” appears to be the question of how this map of potentials could be “experience” but this is ridiculous. The map is information (a physical state) derived from the information (state) on the voltmeters, it is only distantly related to the true nature of an electric field. The “objective representation” of the field is simply states that are related in some highly specific ways to the real field, it is not the field itself.

    To maintain that there is some sort of dualism between the real field and information derived from measurements is like saying that these four letters “tree” are a valid alternative physical reality to a real tree. The letters are a state on the paper that is distantly related to the real physical “tree”, not the tree itself.

    There is only one set of physical objects that are intimately and directly my experience (some set of events in my brain) all other physical objects are only inferred through a chain of state changes culminating in a state change in my experience. There is no dualism or dual aspect; there are events and information (events induced by events – ie: state changes). In fact to confuse information derived from events with events themselves is almost a definition of naive realism.

  19. John says:

    Arnold: “An objective representation of a brain could be an abstracted description or depiction of E1* in your phenomenal world; but your subjective experience of E1* is a direct, unanalyzed event, an integral part of the plenum of your phenomenal surround in retinoid space.”

    I almost agree with this but would write:

    “Information about a brain could be an abstracted description or depiction of E1* in your phenomenal world; but your subjective experience of E1* is a direct, unanalyzed event, an integral part of the plenum of your phenomenal surround in retinoid space.”

    The information is “objective” information but it is not the original events that it describes, merely a state in a machine or on paper that is related to the original events.

  20. Arnold Trehub says:

    Arnold: “An objective representation of a brain could be an abstracted description or depiction of E1* in your phenomenal world; …”

    John: “I almost agree with this but would write:”
    “Information about a brain could be an abstracted description or depiction of E1* in your phenomenal world; …”

    In this context, what is the essential difference between “an objective representation of a brain” and “information about a brain”?

    In our earlier exchange on your blog New Empiricism, under *There is no Information without representation*, you wrote:

    “I agree with your part statement that: ‘.. from the objective 3rd-person perspective, we might experience the kinetics of this mechanism from the outside as a special pattern of neuronal activity in a brain; from the subjective 1st-person perspective, we experience the activity of this special mechanism from the inside as manifest information.'”

    It seems to me that (a) “an objective representation of a brain in the phenomenal world” and (b) “information about a brain in the phenomenal world” are equivalent. If they are not equivalent, how does (manifest) *information* differ from the spatiotemporal pattern of neuronal activity in the Z-planes of the retinoid system?

  21. John says:

    We are probably in agreement. What I was trying to emphasise was that there is a naive realist version of dualism based upon the apparent impossibility of a train of nerve impulses being “pain” or “green”. The proponent of this dualism takes a model of a phenomenon in the brain such as an electric field and then declares that this model could not possibly be the same as experience.

    This apparent dualism is often tackled by pointing out that there are two different perspectives for viewing physical phenomena in the brain: viewing from the outside or viewing from the inside. I would deny that there is any “viewing from the outside” and would rather tackle this apparent problem by pointing out that in the world where the field resides our only knowledge of the field is information derived from the field eg: events that occur as a result of interactions with the field. The events that constitute the field themselves will always be unknown to us. Furthermore the information from the field only allows us to deal with the relationships between events. As an example, if I draw a contour map of field strength on a sheet of paper I have relationships between potentials in space but if I consider the term “potential” I am forced back on the relationship between sheets of gold foil in an electrometer and if I think of electrostatic force I convert it to the opposing force of gravity on the raised gold leaf etc. etc.

    So if I understand the nature of information, I realise that there is no “exterior view” of a field; there is only the model map of potentials in my own experience, in my brain, that I have created on the basis of data from instruments. In fact, to declare that there is an “exterior view” which has apparently intractable problems, such as how a train of nerve impulses could become pain, is a form of naive realism that leads the unwary to postulate dualism.

    If I have pain then there might be nerve impulses that correlate exactly with the distribution of this pain in experience, so the readings on voltmeters may indeed demonstrate that the substrate of the pain is a field. It might then be possible to lessen the strength, phase, frequency, etc. of the field through some physical intervention to lessen the pain. So gaining information about the field is very useful; however, though the pain may well be the field, neither the pain nor the field is a set of readings on a voltmeter, a contour map, or an equation of relations. There is no “exterior view”.

    This argument also applies to simpler cases of naive/direct realism such as confusing the model of the measurements performed by my eyes with the actual events in the world.

  22. John says:

    PS: when I said “The events that constitute the field themselves will always be unknown to us.” I should have said “The events that constitute the field themselves will always be unknown to us unless this field is our own experience itself”.

  23. Arnold Trehub says:

    John: “I would deny that there is any ‘viewing from the outside’ …”

    I agree with your concern about the intellectual error of direct realism, but I am puzzled by your rejection of dual-aspect monism. If you believe (as I do) that a real physical-monistic world (say W-1) actually exists independently of any phenomenal representation of such a world (say W-2), and if you believe that much of the contents of W-2 depends on directing sensory transducers at parts of W-1, then it seems to me that you accept the notion of “viewing from the outside”.

    Consider this example: Many years ago I routinely conducted electrophysiological experiments in which I measured neuronal activity at the tip of micro-electrodes that were implanted at particular stereotaxic coordinates of exposed rat brain, and also stimulated the brain at these sites through micro-electrodes. If you were to watch me conduct the experiment, you could properly say that I was looking at the rat’s brain *from the outside*, and I could say that you were viewing me looking at the rat’s brain *from the outside*. At the same time, each of us would be *experiencing* the same physical situation *from inside* our own brains. As for the rat, it could see part of the experimental setup from the outside, and while we cannot *know* what the rat’s phenomenal experience was, the electrical stimulation systematically induced appetitive behavior that suggested that the brain stimulation was experienced from inside its brain with a particular kind of affect. It seems to me that these are clear examples of dual-aspect monism.

  24. John says:

    Arnold: “…you could properly say that I was looking at the rat’s brain *from the outside*”

    I agree that this mode of expression would be used by anyone viewing the experiment because naive realism is the conventional mode of describing the world.

    Your actual experience was a physical substrate in your brain, such as the retinoid system, and the colours, smells and textures were disturbances of this substrate that were only distantly related to the oscillating fields of photons, organic molecules and Van der Waals forces of the physical brain. In fact the events in your experience were related to relations of the physical events rather than the events themselves. Senses and Science only provide relations.

    The rat’s brain in your experience had the form of a component of a “view”, and it is this form that it is difficult to imagine could ever be a “mind”; but the brain itself could very well have had a little mind within it.

    Returning to the “dual-aspect”, the physical substrate of the rat’s brain was not actually “seen” at all. Even the three-dimensional slice accessible to instruments was not actually “seen”. What was experienced was a model in your experience, composed of some strange vectors directed at a point, that carried false colour, texture and smell. Those who have difficulty imagining how this model of, or geometrical theory about, a brain could become a mind are confusing the model with physical reality.

  25. Charles Wolverton says:

    The problem with Arnold’s description in comment 23 that John addresses in comment 24 (as I mis?-read him) is essentially the infinite regress to which I referred in the next-to-last bullet in comment 16.

    That problem is also the reason I am not a fan of the “subjective 1pp-objective 3pp” terminology. As John points out (my understanding of “naive realism”), the 3pp scientific view on the world is no more “objective” than the 1pp view of the narrow slice of the world called “my mind”. As I suggested in a comment on the “Ephaptic Consc” thread, what distinguishes the 3pp scientific view is its ability to increase the apparent reliability of a consensus opinion within a community about observation reports of commonly accessible events by following a certain methodology. One can’t employ that methodology in the case of 1pp observations. But Sellars suggested the possibility of an alternative methodology for that case (at least I mis?-read him as doing this).

    I see Arnold, John, and Tom as suggesting candidates for such a methodology. And I keep harping on terminology because in the absence of a common vocabulary, I don’t see how approaches can even be compared to see where they agree and disagree, never mind how a consensus position can be reached.

  26. John says:

    Charles, I agree with much of your comment, especially: “the 3pp scientific view on the world is no more ‘objective’ than the 1pp view of the narrow slice of the world called ‘my mind’”.

    In my previous comments I have used the term “model” to stress the way that what is occurring in the world is not what occurs in our experience. In particular, “views” are a property of experience; they are not found in measuring instruments, and a volume of rat’s brain is not a “view”. In fact “views” are so unique to the mind that I believe that the mystery of experience is the mystery of how our experience can be a “view” (see Time and conscious experience). A “view”, whether it is in the scientist’s or the rat’s brain, is a first-person phenomenon. If a scientist declares that he is especially objective because he only considers the models in his experience as if they were the actual things on the lab bench, then the scientist is a naive realist.

    There is no third person experience, only naive realism.

    You mentioned “regress” arguments; these are a result of another foible that is the brother of naive realism: presentism. (See Presentism and the denial of mind.)

  27. Arnold Trehub says:

    John and Charles,

    John: “A ‘view’, whether it is in the scientist’s or the rat’s brain is a first person phenomenon. If a scientist declares that he is especially objective because he only considers the models in his experience as if they were the actual things on the lab bench then the scientist is a naive realist.
    “There is no third person experience, only naive realism.”

    It seems to me that the hang-up here has to do with a confounding of a philosophical notion with scientific practice.

    1. First, I fully agree that there is no 3rd-person experience. There can only be 1st-person experience.

    2. However, the practice of science crucially depends on *descriptions* of experience. In particular on *interpersonal* consensus on descriptions of experience.

    3. A science of consciousness depends on the relationship between two different descriptive domains of experience, (1) descriptions taken from the subjective 1st-person domain — not subject to interpersonal consensus, and (2) descriptions taken from the “objective” 3rd-person domain — subject to interpersonal consensus.

    4. “Looking at the brain from the outside” is simply to suggest an interpersonal (3pp) descriptive perspective, whereas “looking at the brain from the inside” is to suggest a subjective (1pp) descriptive perspective.

    5. It seems to me that the tag of “naive realism” refers to a state of belief. A scientist, as a matter of practice, can treat his measuring instruments as real objects just as they appear to him, and expect others to treat these objects similarly, yet at the same time hold the belief that his experience of these same objects is only a subjective construction/model of what is “out there”.

    6. A naive realist believes that he/she has direct knowledge of the world and everything is really just as it appears to be. Scientists that I know don’t hold this belief.

  28. Vicente says:

    John,

    regarding comment #26, could you please tell us your definition of “objective”, and the difference between “objective” and “subjective”?

    General consensus (scientific?):

    Concise Oxford English Dictionary © 2008 Oxford University Press:

    objective /əbˈdʒɛktɪv/

    adjective

    1) not influenced by personal feelings or opinions in considering and representing facts.

    2) not dependent on the mind for existence; actual.

    No wonder some keep harping on terminology: in the absence of a common vocabulary they don’t see how approaches can even be compared to see where they agree and disagree, never mind how a consensus position can be reached.

  29. John says:

    Arnold, I agree with five of your six points, but point (4) seems slightly problematical because of its implications:

    “4. ‘Looking at the brain from the outside’ is simply to suggest an interpersonal (3pp) descriptive perspective, whereas ‘looking at the brain from the inside’ is to suggest a subjective (1pp) descriptive perspective.”

    The implication is that there is a real brain out there that has the same properties as the geometrical model in each of the scientists’ minds. The real brain is probably a horrendously complex phenomenon lurking in some QM space, and the “view” that appears within it may be due to an emergent space-time that has peculiar geometrical properties, or it may be something different; but whatever it is, it is not the shared model.

    Vicente, I suspect that Arnold and I are surprisingly close in our understanding, and I did actually agree with all six of his points; I brought up the implication that the brain as it appears in a view is like the physical brain because he was not denying it explicitly…though I was probably splitting hairs. I would dispense with “objective” and “subjective” and introduce “shared measurements” (using instruments), “observations in common” (using the models in our minds) and “private observations”. Using these definitions we can talk about “views” as observations in common even though we know that there are no views actually out there – a view is a mental phenomenon. We can also be confident that shared measurements are likely to be information transferred from events that are outside of our minds and hence related to real physical properties. Non-shared, private observations refer mainly to the private content of my mind.

  30. Charles Wolverton says:

    John –

    It’s unclear to me whether the regress to which I refer (roughly as described below) is related to anything in your presentism post.

    When a community of scientists want to predict the behavior of a system (a brain, for example), they implement a version of what Donald Davidson calls (in the context of interpreting the utterances of a foreign language speaker) “triangulating”. They propose a model of a brain and compare observed and predicted behaviors. The triangulating is the process of reaching intersubjective (interpersonal) agreement within the community as to whether the observations and predictions are in sufficient accord to “verify” (in some sense) the model. Variations in the functioning of each individual “mind” in the community are “averaged out”. Implicit in this process (as I understand it) is the assumption that with a large enough community of minds, each assumed generally to function in accordance with certain standards, the averaging will result in the process being sufficiently reliable to achieve relevant goals.

    However, when the system under investigation is the “mind” itself, the “pre-triangulation” assumption of a community of more-or-less homogeneously functioning minds has to be dropped, since how a mind functions is the question to be answered. But then the community would need to triangulate with respect to the behavior of each member’s mind, which leads to the infinite regress.

    (This is my take on Ramberg’s interpretation (in the essay to which I keep referring) of a passage from Davidson that I found especially difficult – as did Rorty. I may or may not have done justice to it, the latter being much more likely.)

    “we can hear whole words stretched through time rather than individual phonemes” (from your presentism post)

    As an OT aside, I speculate that this can be taken a step further: in formulating responses to speech (at least in quotidian conversation), we learn to process whole phrases and short sentences. This is just a guess based on nothing more substantive than the observation that verbal responses often emerge with a speed that seems incompatible with the “parse the verbal input, look up meanings, formulate a response” model that I had always assumed. It also seems to fit better with Wittgenstein’s idea that the “meaning” of a word is determined by its use in a context (“language game”).

  31. John says:

    Charles: “But then the community would need to triangulate with respect to the behavior of each member’s mind, which leads to the infinite regress. ”

    Could you explain how this triangulation results in a “regress”? Infinite regresses and recursions occur where starting or ending conditions in a process cannot be defined. In an infinite regress, an output requires an input that requires a further input of the same type, and so on, but a sufficient input can never be found; in an infinite recursion, that which produces an output acts upon it again endlessly, with no end point that satisfies the criteria for a solution. Presentism, in which only the present instant is believed to exist, leads to an infinite recursion because a person can only know anything by the motion of material in processes, but at each instant there is no motion, so the person can never know anything (the instant being all that is available in presentism). There is no defined end point to the processing because the processing can never create the “knowing” that is its objective.
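    The distinction John draws here – a process that cannot terminate because no starting or ending condition is ever defined, versus one that halts because an end condition is agreed – can be sketched as a toy recursion (an editorial illustration only, with invented function names; it models no one's actual theory):

```python
def justify(claim, depth=0):
    """Toy infinite regress: every claim demands a prior claim of the
    same type to justify it, so no sufficient input is ever found and
    the process never bottoms out (Python eventually raises RecursionError)."""
    return justify(f"justification of ({claim})", depth + 1)


def justify_grounded(claim, depth=0, ground_at=3):
    """The same process with a defined ending condition: once a claim is
    accepted as basic, the regress terminates with a finite answer."""
    if depth == ground_at:  # an agreed starting/ending condition
        return claim
    return justify_grounded(f"justification of ({claim})", depth + 1, ground_at)


# The grounded version halts with a finite chain of justifications;
# the ungrounded version exhausts the call stack.
print(justify_grounded("the sky is blue"))
try:
    justify("the sky is blue")
except RecursionError:
    print("infinite regress: no input ever satisfies the demand")
```

    The point of the sketch is only that an “infinite regress” objection stands or falls on whether some base condition can be defined, which is exactly what John goes on to question about the triangulation argument.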

    You seem to be using a presentist notion of reality when you say, as an explanation of the “specious” present, that “we learn to process whole phrases and short sentences”. If the present is no more than the boundary between the future and the past, and this boundary is 0 seconds long, then at any moment nothing is being processed inside you. You cannot hear a word that lasts a second as a whole word in no time at all.

    The triangulation that you describe does not seem to be a serious obstacle to the description of mind. I cannot see how, for instance, the “view” that we all experience gives rise to a regress simply because we all experience it.

    Presentism is the reason that there is no progress in the philosophy or science of conscious experience: 99% of philosophers and neuroscientists have an absolutely unshakeable belief in reality as a succession of 3D frames, time being no more than some sort of count of the passing frames. Even when it is pointed out that when you hear the word “now” the “now” is in the past, that if you existed for no time at all you would not exist, that there is evidence for time as an observable in quantum physics, or that the existence of time is a necessity in relativity, the philosopher or neuroscientist will still just shake this stuff out of their head and return to presentism. Religious movements have seldom had adherents with such unshakeable faith as the faith of philosophers and neuroscientists in presentism.

  32. Charles Wolverton says:

    John –

    re infinite regress:

    In triangulation, a participant (corner 1) in the community takes as input to their mental processing observations about some set of events, constructs a description of them, then compares that description with the descriptions of others in the community (corner 2) in an attempt to reach some level of agreement on a common description which is felt to be in accord with their perceptions of that part of “the world” relevant to the observed events (corner 3).

    But when the events are mental, the mental processing of each participant becomes part of the observation set, so the community must triangulate again in order to reach agreement on how to describe those mental events in the case of each participant. At which point the process has entered the regress.

    I’m afraid that’s the best I can do. Even after a long struggle, I have a somewhat tenuous grasp on Davidson’s concept. But since Rorty struggled with it as well, that’s not surprising. Even his post-Aha!-moment restatement of Ramberg’s restatement seemed to me a bit obscure.

    re “specious present”:

    I think you misunderstood me – I was agreeing with you that language processing extends over time and suggesting that it might extend over even more time than just phonemes or words. I have no opinion on “presentism” – although your critique seemed reasonable.

  33. Charles Wolverton says:

    Beating an apparently dead horse …

    … it has just occurred to me that the “triangulation” metaphor, which makes sense in Davidson’s scenario of two speakers of different languages trying to match their utterances by relating them (typically by pointing to an entity in the environment while uttering their respective names for that entity, AKA “ostensive learning”), should perhaps be replaced in the current context (justifying beliefs) by the metaphor of a polyhedron the corners of which comprise the members of the relevant community plus that part of “the world” about which the community is attempting to construct beliefs. Then “triangulating” (polyhedronating?) is the process of trying to reach consensus beliefs that are perceived by most members of the community as being in accord with mutual observations about that part of “the world”.

    With this metaphor, the issue about applying “triangulating” to the part of “the world” that is called “minds” (vehicles of beliefs) is simply that the “world” corner then subsumes each of the other corners of the polyhedron – the “mind” of each member of the community. Whether this potential problem with proper functioning of the process is better described as an “infinite regress”, a “collapse of the process on itself”, or some other way is rather a matter of taste.

  34. John says:

    Charles, if we abandon naive and direct realism and accept that “ostensive learning” involves phenomena that are the contents of the virtual reality we call “mind”, then the fact that ostensive learning is possible proves that the regress argument you describe is false. When I have a “chair” in my experience and say “chair” to a non-English speaker, they can indeed connect the word with their own experience, which contains a form that can be labelled “chair”. People do, routinely, compare mental phenomena such as the form of chairs without any regress.

    Your example is apposite because it demonstrates how the “view” that is our experience is confused with the geometry of events outside our brains. The naive realist concept of a “view of a chair” has no geometrical or physical basis outside of the brain. If you trace the light from the chair, it falls all over both corneas to be redirected into two differing patterns of light on the two retinas. There is no conical geometrical form out there, between the head and objects in the world, that allows things to be seen simultaneously at some imaginary point on the bridge of the nose! The “view” is a special geometrical form synthesised by the brain that we call our “mind” or “conscious experience” (see Time and conscious experience).

  35. Charles Wolverton says:

    John –

    I agree that two would-be communicators can reach agreement on a vocabulary for discussing 3pp-accessible objects like chairs or events. But the problem (regress or not) arises for a community trying to reach agreement on a vocabulary for discussing entities that are not 3pp-accessible objects, in particular mental events.

    I’ve been trying to formulate a better description of the issue, but after several days still don’t have a satisfactory one. I’ll have limited or no access to a computer for the next few days, but will try again after that. In the meantime, you might think about what it is that makes constructing a vocabulary for discussion of mental events so much harder than constructing one for discussion of chairs. (Or perhaps give Ramberg’s essay in “Rorty and His Critics” a try.)

  36. John says:

    Charles:”But the problem (regress or not) arises for a community trying to reach agreement on a vocabulary for discussing entities that are not 3pp-accessible objects, in particular mental events.”

    I was stressing the fact that there is a set of mental events that we can indeed agree upon: the axes for arranging events in our experience. Sure, we cannot agree on the exact content of our experience, but we can agree that we both have a view containing the screen as we read this post; if we have a good visual imagination, we can both agree that we can imagine a view like the view containing this screen; if we dream of the screen, we can both agree that we dream of a view; and so on. We can agree on the general arrangement of mental events. Furthermore, when we try to analyse this arrangement, this view, we will discover that it is very strange because it has the characteristics of a 4-dimensional spacetime.
