This town ain’t big enough…

…for two theories?

Ihtio kindly drew my attention to an interesting paper which sets integrated information theory (IIT) against the authors' own preferred set of ideas – semantic pointer competition (SPC). I'm not quite sure where this 'one on one' approach to theoretical discussion comes from. Perhaps the authors see IIT as gaining ground to the extent that any other theory must now take it on directly. The effect is rather of a single bout from some giant knock-out tournament of theories of consciousness (I would totally go for that, incidentally; set it up, somebody!).

We sort of know about IIT by now, but what is SPC? The authors of the paper, Paul Thagard and Terrence C Stewart, suggest that:

consciousness is a neural process resulting from three mechanisms: representation by firing patterns in neural populations, binding of representations into more complex representations called semantic pointers, and competition among semantic pointers to capture the most important aspects of an organism’s current state.

I like the sound of this, and from the start it looks like a contender. My main problem with IIT is that, as was suggested last time, it seems easy enough to imagine that a whole lot of information could be integrated but remain unilluminated by consciousness; it feels as if there needs to be some other functional element; but if we supply that element it looks as if it will end up doing most of the interesting work and relegate the integration process to something secondary or even less important. SPC looks to be foregrounding the kind of process we really need.

The authors provide three basic hypotheses on which SPC rests:

H1. Consciousness is a brain process resulting from neural mechanisms.
H2. The crucial mechanisms for consciousness are: representation by patterns of firing in neural populations, binding of these representations into semantic pointers, and competition among semantic pointers.
H3. Qualitative experiences result from the competition won by semantic pointers that unpack into neural representations of sensory, motor, emotional, and verbal activity.

The particular mention of the brain in H1 is no accident. The authors stress that they are offering a theory of how brains work. Perhaps one day we’ll find aliens or robots who manage some form of consciousness without needing brains, but for now we’re just doing the stuff we know about. “…a theory of consciousness should not be expected to apply to all possible conscious entities.”

Well, actually, I’d sort of like it to – otherwise it raises questions about whether it really is consciousness itself we’re explaining. The real point here, I think, is meant to be a criticism of IIT, namely that it is so entirely substrate-neutral that it happily assigns consciousness to anything that is sufficiently filled with integrated information. Thagard and Stewart want to distance themselves from that, claiming it as a merit of their theory that it only offers consciousness to brains. I sympathise with that to a degree, but if it were me I’d take a slightly different line, resting on the actual functional features they describe rather than simple braininess. The substrate does have to be capable of doing certain things, but there’s no need to assume that only neurons could conceivably do them.

The idea of binding representations into ‘semantic pointers’ is intriguing and seems like the right kind of way to be going; what bothers me most here is how we get the representations in the first place. Not much attention is given to this in the current paper: Thagard and Stewart say neurons that interact with the world and with each other become “tuned” to regularities in the environment. That’s OK, but not really enough. It can’t be that mere interaction is enough, or everything would be a prolific representation of everything around it; but picking out the right “regularities” is a non-trivial task, arguably the real essence of representation.
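
As a concrete aside (not in the paper, and at the vector level rather than the neural level): in the semantic pointer architecture the term comes from, binding is typically implemented as circular convolution of high-dimensional vectors, which yields a compressed 'pointer' from which the components can later be approximately recovered. A minimal NumPy sketch, with purely illustrative names (colour, red, shape, circle):

```python
import numpy as np

def make_pointer(d=512, rng=None):
    """A toy 'semantic pointer': a random unit vector in d dimensions."""
    rng = rng or np.random.default_rng()
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Bind two pointers by circular convolution (computed in the Fourier domain)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(bound, a):
    """Approximately recover b from bind(a, b) by binding with a's approximate inverse."""
    a_inv = np.concatenate(([a[0]], a[1:][::-1]))  # involution of a
    return bind(bound, a_inv)

rng = np.random.default_rng(0)
colour, red, shape, circle = (make_pointer(rng=rng) for _ in range(4))

# A compressed structured representation: COLOUR*RED + SHAPE*CIRCLE
scene = bind(colour, red) + bind(shape, circle)

# Unbinding with COLOUR gives a noisy vector far more similar to RED than to CIRCLE,
# which is what allows the pointer to be 'unpacked' again.
recovered = unbind(scene, colour)
for name, v in [("red", red), ("circle", circle)]:
    sim = float(np.dot(recovered, v) / np.linalg.norm(recovered))
    print(name, round(sim, 2))
```

The compression is lossy, so recovery has to be cleaned up against a known vocabulary of pointers – roughly the sort of 'unpacking' H3 appeals to.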

Competition is the way particular pointers get selected to enter consciousness, according to H2; I’m not exactly sure how that works and I have doubts about whether open competition will do the job. One remarkable thing about consciousness is its coherence and direction, and unregulated competition seems unlikely to produce that, any more than a crowd of people struggling for access to a microphone would produce a fluent monologue. We can imagine that a requirement for coherence is built in, but the mechanism that judges coherence turns out to be rather important and rather difficult to explain.
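
For what it's worth, here is a deliberately crude sketch of the kind of winner-take-all dynamics usually meant by 'competition among representations'. It is not a rendering of Thagard and Stewart's own mechanism, just the bare scheme: each pointer accumulates support in proportion to how well it matches the current state while being suppressed by the activity of its rivals. Notably, nothing in it supplies the coherence worried about above.

```python
import numpy as np

def compete(pointers, state, gain=0.1, inhibition=0.8, steps=50):
    """Crude winner-take-all: each pointer's activation grows with its match to
    the current state and is suppressed by the summed activity of its rivals."""
    match = np.array([np.dot(p, state) for p in pointers])   # evidence for each pointer
    act = np.zeros(len(pointers))
    for _ in range(steps):
        rivals = act.sum() - act                              # everyone else's activity
        act = np.maximum(0.0, act + gain * (match - inhibition * rivals))
    return act

rng = np.random.default_rng(1)
pointers = [v / np.linalg.norm(v) for v in rng.normal(size=(5, 64))]
state = pointers[2] + 0.3 * rng.normal(size=64)   # the current state resembles pointer 2

activations = compete(pointers, state)
print(activations.round(2), "-> winner:", int(np.argmax(activations)))
```

Competition of this kind reliably picks a winner; it does not by itself impose any direction on the sequence of winners, which is exactly the microphone worry.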

So does SPC deliver? H3 claims that it gives rise to qualitative experience; the paper splits the issue into two questions: first, why are there all these different experiences, and second, why is there any experience at all? On the first, the answers are fairly good, but not particularly novel or surprising; a diverse range of sensory inputs and patterns of neural firing naturally give rise to a diversity of experience. On the second question, the real Hard Problem, we don’t really get anywhere; it’s suggested that actual experience is an emergent property of the three processes of consciousness. Maybe it is, but that doesn’t really explain it. I can’t seriously criticise Thagard and Stewart because no-one has really done any better with this; but I don’t see that SPC has a particular edge over IIT in this respect either.

Not that their claim to superiority rests on qualia; in fact they bring a range of arguments to suggest that SPC is better at explaining various normal features of consciousness. These vary in strength, in my opinion. First feature up is how consciousness starts and stops. SPC has a good account, but I think IIT could do a reasonable job, too. The second feature is how consciousness shifts, and this seems a far stronger case; pointers naturally lend themselves better to this than the gradual shifts you would at first sight expect from a mass of integrated information. Next we have a claim that SPC is better at explaining the different kinds or grades of consciousness that different organisms presumably have. I suppose the natural assumption, given IIT, would be that you either have enough integration for consciousness or you don’t. Finally, it’s claimed that SPC is the winner when it comes to explaining the curious unity/disunity of consciousness. Clearly SPC has some built-in tools for binding, and the authors suggest that competition provides a natural source of fragmentation. They contrast this with Tononi’s concept of quantity of consciousness, an idea they disparage as meaningless in the face of the mental diversity of the organisms in the world.

As I say, I find some of these points stronger than others, but on the whole I think the broad claim that SPC gives a better picture is well founded. To me it seems the advantages of SPC mainly flow from putting representation and pointers at the centre. The dynamic quality this provides, and the spark of intentionality, make it better equipped to explain mental functions than the more austere apparatus of IIT. To me SPC is like a vehicle that needs overhauling and some additional components (some of those not readily available); it doesn’t run just now but you can sort of see how it would. IIT is more like an elegant sculptural form which doesn’t seem to have a place for the wheels.

54 thoughts on “This town ain’t big enough…”

  1. A fight between IIT and SPC seems like a fight between whether coffee grounds or water is necessary to make a cappuccino. Both seem necessary but insufficient. Neither seems to get at inner experience. I think Graziano was right in the article you highlighted in your previous post: metacognition, thinking about thinking, needs to be accounted for.

  2. From the referenced paper:
    “A semantic pointer is a special kind of neural representation – pattern of firing in a population of neurons – that is capable of operating both as a symbol and as a compressed version of sensory and motor representations.”

    “According to H2, the third mechanism of consciousness is competition among semantic pointers. Semantic pointers do not by themselves explain consciousness, because there are countless neural representations being formed all the time, most of which do not break through into consciousness.” and
    “Many cognitive scientists have maintained that attention functions by means of competition among representations”

    These semantic pointers sound like attention schema. They are representations of activities the brain is monitoring and they compete with other attention threads for dominance.

    “Qualitative experiences result from the competition won by semantic pointers that unpack into neural representations of sensory, motor, emotional, and verbal activity.”

    The unpacking of the successful semantic pointers (or attention schema?) sounds like an awareness schema that forms the contents of consciousness.

    So isn’t SPC essentially the same as Graziano’s Awareness Schema theory?
    SPC adds a lot of meat to the theory, though.

    SPC claims: "in general qualitative experience is an emergent property of the three mechanisms that operate in organisms capable of neural representation, binding, and semantic pointer competition."

    Graziano, unlike SPC, doesn’t claim the theory results in consciousness.

  3. SPC seems like a good candidate for actual implementation, even though they seem bent on excluding it. I’ve been thinking recently about the possibility of crowd-sourcing an intelligent system into existence, as for instance Google uses web links and user interaction to create its search network. Competition in particular seems to fit exactly with a mass statistical training approach. It would be pretty appropriate if the first artificial conscious entity was constructed by all of humanity.

    Of course, the appearance of intelligence doesn’t necessarily connote consciousness, much less the capacity to perceive qualia. I for one would be satisfied with a generally intelligent search engine. Google is gradually getting smarter, but so far it only gives a weighted average opinion. It’s tempting to think that by just tweaking the grading system, while retaining the immediate utility of the algorithm, it would be possible to create a system that would increase in intelligence. Perhaps this would be equivalent to increasing phi?

    On occasion I type in a real question, wishfully expecting an intentionally intelligent reply. Nothing yet.

  4. Thanks Peter (as always!) for bringing this paper to our attention. You say rightly that

    “On the second question, the real Hard Problem, we don’t really get anywhere; it’s suggested that actual experience is an emergent property of the three processes of consciousness. Maybe it is, but that doesn’t really explain it. I can’t seriously criticise Thagard and Stewart because no-one has really done any better with this; but I don’t see that SPC has a particular edge over IIT in this respect either.”

    If consciousness is an emergent property, it isn’t analogous to other emergent system properties, all of which are detectable from outside, hence straightforwardly physical. This is where I think IIT, although it doesn’t say why integrated information should feel like anything, at least recognizes the deep problem of subjectivity – that experience only exists *for the system*. Oizumi, Tononi et al. in their IIT paper I linked to in the previous thread (Phlegm theory) continually advert to the “intrinsic perspective” of the system as a necessary element in explanations of experience. This suggests consciousness doesn’t emerge as an observable, objective property, whether as side-effect or adaptive trait, but is rather a non-causal entailment of instantiating a representational/informational architecture.

    If Thagard and Stewart can show how their semantic pointer competition hypothesis entails the existence of qualities *for the system only*, then they’ll have made a serious dent in the hard problem. But to do this one has to actually address the characteristics of qualities (qualia) – homogeneousness, non-decomposability, cognitive impenetrability, basicness, ineffability, specificity – and suggest how these characteristics might necessarily fall out of instantiating a representational architecture. For some thoughts on this, see part 5 of The appearance of reality, http://www.naturalism.org/philosophy/consciousness/the-appearance-of-reality#toc-5-the-hard-problem–on-how-consciousness-might-be-entail–KxWIA0- At any rate, I agree with Thagard and Stewart that there’s no reason to think that thermostats are conscious since they don’t instantiate the necessary architecture.

  5. Hi Stephen.
    SPC doesn’t strike me as equivalent to the attention schema theory, but it does seem like it describes how the attention part works. Although I’d agree that the semantic pointers could be the raw material of the schema. It seems like Thagard and Stewart get close to the schema part in their global workspace discussion, but they seem to fall back on emergence for awareness overall.

  6. Tom

    Is it just me? Why do I fail to see any connection between information and consciousness?

    Your analysis that consciousness cannot be an emergent property of lower level physical properties is a key point but it needs to be qualified. Consciousness cannot be predicted to emerge from lower level physical properties using mathematical models. That does not of course preclude the possibility that they may do. In my opinion, they almost certainly do as nothing else makes sense.

    Reject the religious tenets of physics fundamentalism and such things become possible: the natural world is not straitjacketed to conform to our expectations.

    Information is to reality as paintings are to their subjects. It doesn’t exist, except in people’s heads. The causal powers of information are the same as the genetics of the painting of a duck: an impossibility.

    J

  7. John Davey:

    “Why do I fail to see any connection between information and consciousness?”

    Well, experiences certainly seem to be informative about states of affairs, in that their basic qualitative components are immediately distinguishable (green vs. blue vs. yellow) and inter-related along various dimensions (color, taste, timbre); plus their amalgamation into more complex wholes picks out objects and their relations. So I see a pretty intuitive connection between the contents of consciousness and information about the world, including the body.

    “Consciousness cannot be predicted to emerge from lower level physical properties using mathematical models. That does not of course preclude the possibility that they [conscious experiences] may do. In my opinion, they almost certainly do as nothing else makes sense.”

    We might be able to predict that consciousness comes to exist for the system as a subjective, hence non-physical, property according to the considerations I linked to in my previous comment. Physical properties are all in principle objective – that is, intersubjectively available – but experiences definitely are not. That experiences might exist only subjectively and not objectively can make sense, so long as one accepts that existence isn’t restricted to what’s intersubjectively available – not an easy pill to swallow, admittedly.

    “Reject the religious tenets of physics fundamentalism and such things become possible : the natural world is not straitjacketed to conform to our expectations.”

    Now you’re talking. Physicalism is an entirely natural model of the world we get to via experience. We then try to straitjacket experience (consciousness) into physicalism, supposing that the terms of subjective representation (qualia) should somehow be found in the world they represent (physical objects and their components as specified by physics). But we’re not going to find them there, just as we don’t find concepts and numbers out there in spacetime. Qualities don’t appear to physics, only quantities.

    “Information is to reality as paintings are to their subjects. It doesn’t exist, except in people’s heads.”

    Right, existence isn’t restricted to what’s intersubjectively available, but also includes the subjective representational reality (experience) of what’s used by each of us to model the represented, physical, objective world. Just as some folks are realists about concepts and numbers, we can be realists about experience without demanding it be physical. But this isn’t to endorse substance dualism or anything spooky or supernatural.

  8. Tom: “Physical properties are all in principle objective – that is, intersubjectively available – but experiences definitely are not.”

    The results of this experiment were predicted by the neuronal structure and dynamics of the retinoid model of consciousness:

    https://www.researchgate.net/file.PostFileLoader.html?id=56a6586e7eddd3c3e98b4569&assetKey=AS%3A321832228458501%401453742190419

    This seems to be a case where a conscious experience/hallucination is intersubjectively available. What do you make of it?

  9. For a large-scale model of a brain – Spaun – that uses the semantic pointer architecture see for example this demo: https://www.youtube.com/watch?v=RrxmlbZa7C4 and for the full playlist go to https://www.youtube.com/playlist?list=PLYLu6sY3jnoV2DNi84T5OKqJ0oYTxFrQv

    For a counting task in BioSpaun (Spaun with neurons that are closer to biology) see for example: https://www.youtube.com/watch?v=FoOGqzG8_WU

    Spaun / BioSpaun / Nengo are – to my knowledge – the only large-scale brain models capable of performing multiple tasks. Not even Facebook or Google systems have that.

  10. Tom Clark,

    We might be able to predict that consciousness comes to exist for the system as a subjective, hence non-physical, property according to the considerations I linked to in my previous comment. Physical properties are all in principle objective – that is, intersubjectively available – but experiences definitely are not.

    Just because we cannot easily "observe" or "transfer" experiences / qualia from person to person doesn’t mean that they are "subjective" or "non-physical". If we couldn’t experience dark matter would we say that dark matter is "subjective"? If we couldn’t observe subatomic particles would we say that they are not physical? Of course not. We can’t observe other animals’ qualia, because we don’t have the proper instruments, proper interface. If we were to connect a portion of my visual cortex to the corresponding portion of your visual cortex, then we would most likely be able to experience similar qualia (although much distorted) – at least, it’s a hypothesis that cannot be ruled out by a priori stating that consciousness is "non-physical".

  11. You could consider the spinning armature of an electric motor as an objective physical property. The emf in the armature winding is not observable, so it is considered "non-physical" by a Cartesian Dualist. The picture of neuronal interactions is simply incomplete, so they consider subjectivity non-observable and non-physical, which looks like an intuitive howler to many scientists and technical observers.

  12. A few replies re the categorical privacy/subjectivity of experience – sorry about this detour!

    Arnold:

    “This seems to be a case where a conscious experience/hallucination is intersubjectively available. What do you make of it?”

    "Independent observers looking over the subject's shoulder have the same hallucination of the changing triangle as the subject changes the shape of the hallucination to maintain base-height equality."

    Those who aren’t the subject don’t have the subject’s experience, although they undergo the same hallucination/illusion. They each have their own *experience* of the illusion.

    Ihtio:

    “If we were to connect a portion of my visual cortex to corresponding portion of your visual cortex, then we would most likely be able to experience similar qualia (although much distorted).”

    Again, there would be two distinct and unshared experiences happening, even though the content of the experiences would presumably be similar in some respects. There’s no way to “look” at each experience, compare them, and thus know the extent of the similarity. Experiences are never observed, not even by the subject that undergoes them.

    VicP:

    “You could consider the spinning armature of an electric motor as an objective physical property. The emf in the armature winding is not observable so considered “non-physical” to a Cartesian Dualist.”

    The emf (electro-magnetic field) is observable (detectable) by means of various emf detectors, so is a perfectly physical, objective property. What isn’t observable or detectable is my pain, only its physical correlates, whether neural or behavioral. Were pain and other experiences observable in the way that objective physical phenomena like emf are, the problem of other minds wouldn’t arise. We would know with certainty whether fish feel pain, a matter of considerable debate among ethicists. Claims that other creatures and systems are conscious are made on the basis of their behavioral, biological, and (more recently) computational/representational similarity to us.

    More on privacy at http://www.naturalism.org/philosophy/consciousness/respecting-privacy

  13. Tom: “Those who aren’t the subject don’t have the subject’s experience, although they undergo the same hallucination/illusion. They each have their own *experience* of the illusion.”

    Of course each person has his/her experience of the hallucinated object. But if the independent observers have the same hallucination that the subject self-induces, why can’t we say that the experience of the hallucination is inter-subjectively available?

  14. Tom: How do you know the armature does not feel pain or pleasure etc.?

    We assume it doesn’t, and we know all armatures are the same, just as all biology (especially other humans) is structured similarly to ours. So there are no non-starters here, just the gap of not understanding inner neuron function.

    Also, just as all armatures emit detectable emf, the 40 Hz oscillation is also present in the CNS.

  15. Arnold:

    “Of course each person has his/her experience of the hallucinated object. But if the independent observers have the same hallucination that the subject self-induces, why can’t we say that the experience of the hallucination is inter-subjectively available?”

    It’s because there isn’t one experience that is shared, but rather multiple experiences, each of which has the same reported content: seeing the (illusory) triangle. It’s this single illusion, produced by the experimental set-up, that is intersubjectively available to the group, not a single experience of the illusion. Likewise, the (real) room in which the SMTT experiment is carried out is intersubjectively available (everyone sees it and describes it in more or less the same way), while each person’s experience of the room is available only to themselves. That the triangle is illusory doesn’t make one person’s *experience* of the illusion any more intersubjective than her experience of the room.

  16. Tom,

    The real room in which the SMTT experiment is conducted is intersubjectively available because its sensory features are publicly available. This is not the case with the SMTT hallucinated object which has no sensory features, only the subject’s phenomenal features that are self-induced by a single individual and shared by others.

  17. Arnold:

    “The real room in which the SMTT experiment is conducted is intersubjectively available because its sensory features are publicly available. This is not the case with the SMTT hallucinated object which has no sensory features, only the subject’s phenomenal features that are self-induced by a single individual and shared by others.”

    The subject has her own individual experience of the illusion as produced by the intersubjectively available experimental apparatus, and each of the others likewise have their own individual experience as thus produced. Each person’s experience, as constituted by its phenomenal features, is strictly their own; it’s subjective, not intersubjective.

    In his talks on consciousness, Dennett likes to induce the experience of an afterimage of an American flag in his audience and then ask where the afterimage is. Each member of the audience experiences an afterimage, so although the content of these experiences is more or less the same, the experiences themselves are numerically distinct and unshared – one to a customer.

  18. Tom

    “experiences certainly seem to be informative about states of affairs”

    .. information can be extracted from feelings and experiences – particularly given their links to other knowledge – but I don’t see that in isolation they’re even remotely informative.

    In what sense is a green colour experience informative? Informative about what exactly? If I see green – with eyes closed – I see green. End of. But if I look on a field and I see green I can link that experience to my knowledge and think "this is a field of grass and it is well watered". But the experience itself is just an experience.

    I’ve never really seen any sense in treating an isolated sensory experience as being anything other than just an isolated experience. As babies grow the brain – rather naturally – links conscious experiences to information about the world – but there is no evidence that we are born with that ability – quite the contrary, we have to learn it and to organise all this stuff through infancy.

    Qualia are characteristics of consciousness. That information can be about qualia, rather than contained within qualia, seems obvious. But consciousness itself – the feeling of being able to feel – this in particular strikes me as being a most peculiarly uninformative state. It is what it is, and it’s difficult to see what information could be about it, other than information linked to the feeling of consciousness.

    “That experiences might exist only subjectively and not objectively”

    I’ve never bought this distinction and I’ve never seen the need for it. It seems to me to be a category error. Experiences are experienced subjectively but that doesn’t preclude them from existing objectively. I can talk of your consciousness/my consciousness whatever, with no ambiguity.

    Consciousness can exist objectively, naturally and immaterially as a first-person experience phenomenon. Science, it seems to me, will sooner or later be able to predict its existence based upon material factors. But physics – using a semantically closed portfolio of space, mass and time – will never do so.

    “Now you’re talking. Physicalism is an entirely natural model of the world we get to via experience. We then try to straitjacket experience (consciousness) into physicalism, supposing that the terms of subjective representation (qualia) should somehow be found in the world they represent (physical objects and their components as specified by physics). But we’re not going to find them there, just as we don’t find concepts and numbers out there in spacetime. Qualities don’t appear to physics, only quantities.”

    Agree, 100%..

    “Right, existence isn’t restricted to what’s intersubjectively available, but also includes the subjective representational reality (experience) of what’s used by each of us to model the represented, physical, objective world. ”

    More modelling.. I think this language can be a mistake. It’s sometimes akin to the head-inside-the-head thing. It’s the old view that we don’t see something, we see an image of something.

    When we analyze a problem – as conscious beings – we might draft a model of it on a piece of paper, or make a painting. That’s where this idea originates that that is what the brain does too. But that of course implies there is a modelling conscious agent inside your head besides the other conscious agent.

    The brain just does what it does. It generates mental phenomena. The mental phenomena may or may not relate to the objective world, but let’s not get too carried away with the idea that it’s a representation of it.

    J

  19. Tom: “The subject has her own individual experience of the illusion as produced by the intersubjectively available experimental apparatus, and each of the others likewise have their own individual experience as thus produced. Each person’s experience, as constituted by its phenomenal features, is strictly their own; it’s subjective, not intersubjective.”

    1. The experience is a hallucination, not an illusion.

    2. The hallucination of a laterally oscillating triangle with its base equal to its height is produced by the subject by controlling an invisible stimulus. So it seems that the subject’s conscious experience is intersubjectively available to others exposed to the invisible stimulus with its objective features that do not correspond to the induced hallucination.

    3. Why can’t we say that the hallucination is both subjective and shared intersubjectively among N observers?

    4. How would you define an intersubjective event?

  20. John: "The brain just does what it does. It generates mental phenomena. [1] The mental phenomena may or may not relate to the objective world, but let’s not get too carried away with the idea that it’s a representation of it. [2]"

    1. But science wants to know the biological structure and dynamics of brain mechanisms that can generate relevant mental phenomena (e.g., the retinoid system).

    2. The fact that we survive in what we take to be the objective world suggests that our brain representation of the world captures at least some of its significant properties. Of course, many of our representations of the real world can also be wrong (e.g., the moon illusion).

  21. John:

    “.. information can be extracted from feelings and experiences – particularly given their links to other knowledge – but I don’t see that in isolation they’re even remotely informative. In what sense is a green colour experience informative?”

    Agreed. Single qualia in isolation don’t mean a thing. They have to be in context, and usually are.

    “Experiences are experienced subjectively but that doesn’t preclude them from existing objectively. I can talk of your consciousness/my consciousness whatever, with no ambiguity. Consciousness can exist objectively, naturally and immaterially as a first-person experience phenomena.”

    I agree that experiences exist in the natural world, so they are as real as the brain states they correlate with and objective in that sense, but they aren’t objective in being intersubjective, observable, physical existents in the way brains are. If they were, there wouldn’t be any problem of consciousness.

    "The brain just does what it does. It generates mental phenomena. The mental phenomena may or may not relate to the objective world, but let’s not get too carried away with the idea that it’s a representation of it."

    Well, we know the brain exists to model and predict events in service to survival, and we know experience correlates very closely with certain brain processes which are essential for learning, memory, and novel and complex behavior at our level. So it isn’t unreasonable to think that experience is more or less a subjective representation of the world that runs in parallel with certain behavior guiding, representational brain processes.

    In dreams we experience the model independently of the world, and in lucid dreams we get to actually appreciate the fact that experience, somehow a function of what the brain is doing, constitutes the world for us as subjects, whether waking or dreaming.

    http://www.naturalism.org/philosophy/consciousness/experience-as-a-virtual-reality

  22. Arnold:

    “… it seems that the subject’s conscious experience is intersubjectively available to others exposed to the invisible stimulus with its objective features that do not correspond to the induced hallucination.

    “3. Why can’t we say that the hallucination is both subjective and shared intersubjectively among N observers?”

    What’s intersubjectively available is the experimental apparatus, which when viewed by N observers induces N hallucinations (N experiences), each undergone by a single individual, hence subjective. Experiences aren’t the sort of things that can be seen or otherwise observed, so aren’t intersubjectively available.

    “4. How would you define an intersubjective event?”

    What different observers can see, such as the operation of the experimental apparatus in the SMTT experiment.

  23. Tom: "4. How would you define an intersubjective event?"

    "What different observers can see, such as the operation of the experimental apparatus in the SMTT experiment."

    That is just the point. All that the observers can see is the hallucination of the subject. The experimental stimulus is not seen — it is invisible to all. Strange but true.

  24. Arnold:

    “All that the observers can see is the hallucination of the subject.”

    The subject’s hallucination is an experience that the subject has as induced by seeing the experimental apparatus. The apparatus is something that all suitably placed observers can see, but experiences can’t be seen or observed. So it isn’t the case that the observers can see the hallucination of the subject.

    I can’t see or otherwise observe or have your pain, nor you mine, and the same goes for hallucinations, should either of us have one. I’ll stop here, since I’ve belabored the point about the privacy of experience long enough.

  25. “Agreed. Single qualia in isolation don’t mean a thing. They have to be in context, and usually are.”

    .. but the question is what provides the context? Is it consciousness, or higher level brain functions that are organising this conscious input into something that could be described as information? I think the latter, otherwise we’d be born with these capacities. It’s known that mature (ie post-infancy) brains impose structure on sense input with familiar effects in e.g. psychoacoustics.

    ” but they aren’t objective in being intersubjective, observable, physical existents in the way brains are.”

    they aren’t material I agree .. but not observable? Isn’t this simply an overly tight definition of a standard of observability?

    They are not observable in the same sense and standard that atoms are not observable. But given a theoretical framework atoms become ‘observable’ – ie we can conduct experiments and tests which produce numbers, assuming atomic theory to be true. Given a theoretical framework relating mental phenomena to matter (which would be ad hoc and not in the manner of standard physics) then mental phenomena too would be observable. To anaesthetists and medical professionals of course, mental phenomena are fully observable now, otherwise they wouldn’t be able to work. The first person character of subjective experience cannot preclude the use of physical metrics to measure it, however vague it may be deemed to be.

    “If they were, there wouldn’t be any problem of consciousness.”..

    .. which is my point. There is no problem of consciousness – there is a problem of culture. The focus of science upon the notion of the material has precluded – in the western world at least – any serious treatment of the mental. This has been exacerbated by what can only be described as 300 years of propaganda about physics, propaganda that its main founder, Newton, would never have agreed with.

    “So it isn’t unreasonable to think that experience is more or less a subjective representation of the world”

    A representation is "the description or portrayal of someone or something in a particular way" (Google dictionary). Taken from a 3rd party perspective, somebody else’s brain could be viewed as containing a representation of the reality in our brain. But from a 1st person perspective it makes no sense. In order to represent something, you have to know what that "something" is in the first place – otherwise you can’t represent it.

    But that prior knowledge is only possible from mental activity. So if what the brain does is ‘representation’ there would have to be an initial process of mental realisation of reality then a subsequent process of ‘representing’ it. This is clearly nonsense : the head inside the head.

    It’s perhaps easier if you think about seeing things. Seeing things involves the creation of mental images. Creation is seeing. But for a long time of course there was a belief that visual image creation was followed by seeing. A head inside the head again, and all done to try to accommodate the notion of hallucination. Hallucination is, of course, seeing – you just see things that aren’t there.

    And in the same vein a brain doesn’t “represent” reality. It responds to the external world by constructing a mental framework from it which may or may not have a correspondence to it.

    I’m not sure I agree with this as I don’t know what ‘observable’ actually means.

  26. In Empiricism and the Philosophy of Mind, Sellars distinguishes non-epistemic and epistemic "experiencings". The former he calls an "undergoing" – uninterpreted neural activity consequent to sensory stimulation (see note below). For Sellars, the latter is an epistemic achievement resulting from a learning process which results in the ability of multiple observers to "describe" (ie, emit utterances in response to) private – but presumably similar – undergoings using essentially the same vocabulary. Eg, in "I see a red triangle", "see" is an epistemic achievement in the sense that the observer’s "undergoing" consequent to certain sensory stimulation has become associated with the phrase "red triangle" via participation in a linguistic/epistemic community.

    The disconnect between Tom and Arnold seems to result from use of a vocabulary that fails to make that distinction clear. Using the above terminology, the observers in the SMTT experiment have different “undergoings” (or if you prefer, “non-epistemic experiencings”). But because they are members of the same epistemic/linguistic community, they “see” the same thing; ie, they use the same words to describe their individual undergoings. In their exchange, Tom and Arnold mostly use “intersubjective” in the perspectival sense of multiple observers receiving sensory stimulation from a common source (a sense that is adequately captured by simply saying that the source is “public”). However, in comment 15, Tom says:

    the … room in which the SMTT experiment is carried out is intersubjectively available (everyone sees it and describes it in more or less the same way), while each person’s experience of the room is available only to themselves.

    This appears to conflate the perspectival and linguistic/epistemic senses of “intersubjective”. I think that simply rephrasing that quote as follows accurately captures the SMTT scenario while avoiding the ambiguity between the two senses of “intersubjectivity”:

    Because in the SMTT experiment the source of sensory stimulation is public and the observers are members of the same linguistic/epistemic community, the observers will tend to respond to the common sensory stimulation with similar utterances notwithstanding that their individual undergoings are private.

    This way of expressing the situation makes clear that each observer has a private non-epistemic experiencing but a shared epistemic experiencing. This suggests to me that the use of “intersubjectivity” could be profitably excluded from the discussion.

    I think “subjective” and “objective” are also problematic in this context. The 3pp isn’t “objective” in some sense of better capturing reality, it just benefits from the redundant descriptions of undergoings that may be available when the source of sensory stimulation is public and the observers are members of the same linguistic/epistemic community. And although an undergoing is “subjective” in that no one can “undergo” another’s neural activity in response to sensory stimulation even if its source is public, that fact seems adequately captured by simply calling undergoings “private”. Since “objective” seems misleading and “subjective” is unnecessary (and baggage-laden), perhaps they also need to be retired in this context.

    Note: What I take Tom to mean by “subjective experience” and Arnold to mean by phenomenal experience 1.

  27. Charles: “Because in the SMTT experiment the source of sensory stimulation is public and the observers are members of the same linguistic/epistemic community, the observers will tend to respond to the common sensory stimulation with similar utterances notwithstanding that their individual undergoings are private.”

    What is different about the SMTT example is the fact that there is no sensory stimulus that provides an epistemic basis for the observers’ descriptions. The subject self-induces a hallucination of a triangle with its base equal to its height, and others, looking at the screen with no apparent visual stimulus, experience the same hallucination.

  28. John in 25:

    “And in the same vein a brain doesn’t ‘represent’ reality. It responds to the external world by constructing a mental framework from it which may or may not have a correspondence to it.”

    But correspondence is at least an essential element of representation. To get back to the OP, Thagard and Stewart lean heavily on the concept of representation and meta-representation throughout their paper. To wit:

    “H2. The crucial mechanisms for consciousness are: representation by patterns of firing in neural populations, binding of these representations into semantic pointers, and competition among semantic pointers…Hypotheses H2 breaks down into three claims about neural representation, semantic pointers, and competition. The first of these is relatively uncontroversial. Neural populations represent the world because neurons that interact with the world and each other become tuned to regularities in the environment (e. g. Dayan & Abbot, 2001; Eliasmith & Anderson, 2003; O’Reilly & Munakata, 2000).”

    So on their view representation is a correspondence or “tuning” of neural populations to regularities in the world. As they say, this doesn’t seem a terribly controversial idea.

  29. Tom: “So on their view representation is a correspondence or “tuning” of neural populations to regularities in the world. As they say, this doesn’t seem a terribly controversial idea.”

    It should be controversial because the world must first be represented as a volumetric surround from an egocentric perspective before the neuronal “tuning” of its regularities can promote our kind of adaptation to events in the world.

  30. Arnold:

    [In] the SMTT example … there is no sensory stimulus that provides an epistemic basis for the observers’ descriptions.

    Well, I hoped to avoid it, but since you won’t let me get away with any short cuts, here’s the rest of Sellars’ story.

    For him, “epistemic” comes with a confidence measure which allows both “seeing” an object that is there and “seeming to see” one that may or may not be there. In both cases the visual stimulus is the same, but there may be a greater or lesser degree of confidence in those responses depending on the context, including the observer’s insight into the experimental setup.

    For example, a subject who knows that the light in the room is essentially white and who can move so as to see the target object from different positions may confidently assert “I see a red ball”. The same subject without those insights into the setup may say “I seem to see a red ball, but it may well be a white disc illuminated by red light”. Both assertions are epistemic but express different degrees of confidence in the nature of the object.

    In discussing the SMTT, you distinguish what I assume to be a naive subject and sophisticated observers, the former lacking, the latter having, such insights into the experimental setup. In which case the subject and the observers are not members of the same epistemic community (at least with respect to the experiment). Then the subject may confidently – but incorrectly – say “I see a triangle” while the observers – who know there is no triangle to be seen – will say “I seem to see a triangle”. In that sense, the stimulus can be said to be “epistemic”.

  31. Charles,

    As a sophisticated observer with knowledge of SMTT and the retinoid model, I say “I hallucinate a triangle and the subject hallucinates a triangle”. Unsophisticated subjects and observers don’t realize that their vivid visual experience is a hallucination and confidently say “I see a triangle”.

  32. I think that’s essentially what I said. But in any event, your replies seem to ignore my point, which is that your exchange with Tom appears to involve an apples and oranges comparison. He says – correctly (because trivially true) – that each observer (whether subject or not) has a private non-epistemic experiencing (PE1?) that may or may not be unique. And each has an epistemic experience which will result in similar public "descriptions" by members of the same epistemic community (with respect to the experiment). So, my questions are:

    1. Does anyone disagree with my way of expressing the experiment?

    2. If not, is anything added by expressing this using instead “objective”, etc, words which seem to me to confuse rather than clarify the discussion?

    It would be interesting to know how children at different stages of linguistic development respond to the experiment. Specifically, how would children with limited vocabulary “describe” their non-epistemic experience? They obviously can’t use “triangle” if it isn’t in their vocabulary, but do they “see” a figure comprising three straight lines forming a closed figure? Ie, does one have to have the concept of a triangle in order to “hallucinate” one?

  33. Charles,

    My experience is not one of *seeming* to see a triangle in the SMTT experiment. My experience is one of actually seeing a triangle. But I also know that there is no triangle to be seen, so I realize that my experience is a vivid hallucination of a triangle. It is the latter conclusion that is epistemic. But it follows my initial conscious experience/hallucination. As for the use of “objective”, I agree that it is problematic. I think it is better to say that the event is a public event observable by more than a single person.

  34. Charles,

    I don’t think that one needs a concept of a triangle in order to hallucinate one. This might be tested by having pre-linguistic children undergo SMTT and then observing their responses to a drawn triangle among other shapes.

  35. In Sellars’ vocabulary, a viewer who is sophisticated but doesn’t have your insight into the experiment would hedge by saying “I seem to be seeing a triangle, but I may not be”. Having the additional information that there’s no triangle there, you don’t have to hedge and can say “despite hallucinating a triangle, I know none is there”. But in Sellars’ vocabulary, neither can say “I see a triangle there” because your epistemic state includes knowledge that there is none and the other sophisticated viewer can’t have the confidence suggested by “see”. OTOH, a naive viewer may confidently say “I see a triangle”, but only because of failure to recognize the possibility of being wrong.

    Altho I failed to say it explicitly, my original point was that I think you and Tom are both right but using language that obscures that fact. The non-epistemic experience is private and the epistemic experience is in a strong sense "public" because in Sellars’ concept of knowledge, "knowing that P" is the result of social practice, ie experiences that have a large degree of commonality and agreement on how they are to be described.

  36. Charles ,

    Would you agree that the non-epistemic experience of all participants in the SMTT experiment is a shared conscious experience because all are experiencing the same hallucination that is induced by the subject?

  37. Arnold,

    The following statement, taken directly from the question you posed to Charles:

    the non-epistemic experience of all participants in the SMTT experiment is a shared conscious experience because all are experiencing the same hallucination that is induced by the subject

    can be expressed as: All participants experience X; therefore X is a shared conscious experience. This however would mean that we share a great many conscious experiences, for example in the cinema or the theater, where we are all afraid or excited.

    Or I may be swayed by “[experience] is induced by [a] subject”. Would you care to clarify? And how would one differentiate experience induced by a subject vs one that is not induced by a subject?

  38. Arnold @38:

    I’ll try to express my understanding in RS terms. I see the similar, but not identical, private non-epistemic experiencing as being essentially PE1, positioned in the surround with respect to I! but before identifying any learned linguistic associations. And I see the subsequent (also similar, but not identical) private epistemic experiencing – the association of PE1 with words as the product of a public social practice: Sellars’ placing of one’s assertions in the “space of reasons” with the goal of justifying them to an epistemic community.

    Here’s my guess as to why all observers (with some threshold level of linguistic sophistication) have the same epistemic experience (ie, "hallucination" of a triangle), a guess based on the heuristic model I use for how neural activity patterns are associated with words. I assume that there are "learned" neural structures that respond to associated patterns of stimulation something like communication matched filters respond to signals: maximum response to the target signal and progressively weaker response as deviation of the actual signal from the target signal increases. In terms of this model, the visual sensory stimulus in the SMTT presumably is sufficiently "close" (in some sense) to the target stimulus for a neural "matched filter" that when triggered responds by effecting the utterance "triangle". It is only in some sense related to this view that I would call the hallucination that is common to all observers "shared". And it isn’t obvious to me what benefits would accrue to doing so.
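
    To put the matched-filter heuristic in code (a toy sketch only – nothing neural about it, and nothing to do with the SMTT apparatus itself): a matched filter just correlates its input with a stored template, giving a maximum response to the target pattern, a weaker but still substantial response to a corrupted copy of it, and almost none to an unrelated signal.

    ```python
    import numpy as np

    def matched_filter_response(signal, template):
        """Normalized correlation with a stored template: 1.0 for a perfect match,
        smaller as the input deviates from the target pattern."""
        s = signal - signal.mean()
        t = template - template.mean()
        return float(np.dot(s, t) / (np.linalg.norm(s) * np.linalg.norm(t) + 1e-12))

    x = np.linspace(0.0, 1.0, 200)
    template = np.sin(2 * np.pi * 3 * x)   # the pattern the 'filter' is tuned to

    inputs = {
        "exact target":     template.copy(),
        "corrupted target": template + 0.8 * np.random.default_rng(0).normal(size=x.size),
        "unrelated signal": np.sin(2 * np.pi * 11 * x),
    }
    for name, sig in inputs.items():
        print(f"{name}: {matched_filter_response(sig, template):+.2f}")
    ```

    On this analogy the moving dot would play the role of the corrupted input.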

  39. Ihtio: “Would you care to clarify? And how would one differentiate experience induced by a subject vs one that is not induced by a subject?”

    In the case of a movie there are publicly accessible images that are perceived by all in the theater which induce all kinds of subjective experiences. In the case of the SMTT triangle there is no publicly accessible image of a triangle to be perceived. All that the independent observer directly experiences is the hallucination that is self-induced by the subject. There is no intermediation of perception.

  40. Charles,

    In the SMTT experiment there is *no sensory stimulus that might excite a matched filter*. There is only the vivid phenomenal experience of a horizontally oscillating triangle with a base equal to its height. When you think about it, it is an extraordinary finding.

  41. Arnold,

    In the case of a movie there are publicly accessible images that are perceived by all in the theater which induce all kinds of subjective experiences. In the case of the SMTT triangle there is no publicly accessible image of a triangle to be perceived. All that the independent observer directly experiences is the hallucination that is self-induced by the subject. There is no intermediation of perception.

    I wasn’t talking about publicly accessible visual scenes or images. I was talking about emotions, such as joy, excitement, fear, that are not publicly accessible, but are experienced by all the viewers.
    There are no emotions "out there", but viewers experience them.
    There is no triangle "out there", but subjects experience it.

    Your position leads to the conclusion that many experiences are publicly accessible or shared.

  42. I don’t doubt that there are many experiences that are publicly accessible and shared. Art, literature, and science would not exist otherwise. But we are concerned here with sharing what we take to be the private conscious experience of a single individual.

  43. "In the SMTT experiment there is *no sensory stimulus that might excite a matched filter*."

    I have no idea how to interpret this. Presumably, the reason the SMTT is of interest is precisely that observers have the phenomenal experience of a triangle despite there being no “triangular” source of sensory stimulation. But there obviously IS a source of sensory stimulation, viz, the moving dot. Presumably, the question is how that clearly non-triangular sensory stimulus causes the phenomenal experience of a triangle.

    Of course, I don’t know the answer and was just explaining the heuristic I use in thinking about how the learned association of a sensory stimulus and a linguistic token might work. Matched filters can respond to signals that to the casual observer might not look much like the target signal, so their behavior seems somewhat analogous to the SMTT experiment’s results if one thinks of the moving dot as a corrupted input signal to a neural structure intended to detect a “triangular input signal”. And once the brain has made the association of a sensory stimulus and the word “triangle”, it is easily imaginable that a corresponding phenomenal experience could result.

  44. Charles: “Presumably, the question is how that clearly non-triangular sensory stimulus causes the phenomenal experience of a triangle.”

    Yes, the vertically oscillating dot is the sensory stimulus that induces the hallucination of a horizontally oscillating triangle. But observers are not aware of the dot or the slit in which it is exposed; their only conscious experience is of the triangle. The theoretical explanation is that the “invisible” stimulus of the vertically oscillating dot induces the construction of a complete triangle in *horizontal* motion by the operational properties of the retinoid mechanisms. It is this *endogenously constructed* egocentric excitation pattern that is processed by our matched filters — filter cells — in the synaptic matrices and then in the semantic networks to give the response “I see a triangle”. This part is in accordance with what you suggest. But what is of theoretical interest regarding consciousness is the role of the putative retinoid system as the source of conscious experience.

  45. Arnold:

    I understand and agree with everything in that comment. The point of my original comment was simply that once one knows enough to write a paragraph like that, I don’t see what is gained by continuing to use problematic terms like “objectivity”, “subjectivity”, etc. The issue of private-public, objective-subjective, 1pp-3pp has infiltrated this blog for years with seemingly no resolution. Maybe it’s time to build a wall!

    Of course, in trying to rephrase Tom’s quote, I had to use some of those terms. But that’s because I don’t understand the details of the RS well enough to write a comment like yours.

  46. Charles, yes, for what it is worth, I agree with you. There are so many people that can only think in these terms, such as either non-conscious or conscious, black or white, that they are incapable of spectrum thought, which of course contains at least 50 shades of grey. This applies to 1pp and 3rdpp, which I have previously suggested might be joined by a variation which includes 2ndpp.

  47. Tom

    Representation is a two phase process. First comes cognition of what is to be represented; then comes the transference of features from cognition to the medium of expression.

    The brain does not have two phases, only one. It makes as much sense to say the brain represents the world as to say that the body of water in a river represents the topography of the river bed it flows through. From a 3rd person perspective it has some use as an expression – the water is the same shape as the river bed – but representation is an intentional act that a river does not perform, nor does a brain in normal low level sense activity.

    J

  48. John, here’s a passage from Churchland and Sejnowski, “Neural Representation and Neural Computation” (url below) that talks about representation by the brain in the way I’m getting at, which again seems to me uncontroversial:

    "A theory of how states in a nervous system represent or model the world will need to be set in the context of the evolution and development of nervous systems, and will try to explain the interactive role of neural states in the ongoing neurocognitive economy of the system. Nervous systems do not represent all aspects of the physical environment; they selectively represent information a species needs, given its environmental niche and its way of life. Nervous systems are programmed to respond to certain selected features, and within limits they learn other features through experience by encountering examples and generalizing. Cognitive neuroscience is now beginning to understand how this is done (Livingstone 1988; Goldman-Rakic 1988; Kelso, Ganong, and Brown 1986). Although the task is difficult, it now seems reasonable to assume that the “aboutness” or “meaningfulness” of representational states is not a spooky relation but a neurobiological relation. As we come to understand more about the dynamical properties of networks, we may ultimately be able to generate a theory of how human language is learned and represented by our sort of nervous system, and thence to explain language-dependent kinds of meaning."

    There’s a whole raft of papers on “Representation in Neuroscience” compiled by Chalmers at http://consc.net/mindpapers/7.2b

    Here’s Churchland and Sejnowski: http://papers.cnl.salk.edu/PDFs/Neural%20Representation%20and%20Neural%20Computation%201990-3325.pdf

  49. Tom

    My question is about the use of this word “representation” in the context of mental activity. I don’t think it’s a very good word. In high level mental activity – thinking – representation makes sense. But it makes no sense for everything else.

    For a start, mental imagery of the external is not representation. It’s a neural product. There are no colours in the physical world, no smells, and nothing like them in the physical world, so it makes no sense to view a red colour experience as a ‘representation’ of EM waves of x nanometres, as there is no comparator to ‘represent’ in this way in the first place. It’s the same with smells and noises. There is no sound related to a wave of gas, so noise is not a representation, it is a product.

    There are other ambiguities. There is no way of knowing if there is an independent shape of spatial extension in the "real" world that means our mental imagery of space has a meaningful correspondent, or if it too is an imposition or product of some kind. However as we cannot think outside of a time context or a space context this is difficult. It is difficult to think of how visual imagery of space might not correspond to how space actually is. Exactly the same considerations may be made of time. The mental image or feeling of time may or may not correspond to how time actually is, so could it be said to be a representation?

    The problem is all our theories of the external world grow out of our sensual experience of it. There is a crucial interdependence between conscious experience and the notion of an objective, independent universe and what it might "look like". I’m not convinced that the use of physics can guarantee that this dependence can ever disappear and that we can conceive of an external world truly outside of human cognitive limits.

    So I suppose I’m not keen on the use of the word "representation" apropos mental activity, because you have to know what it is you are representing before you can represent it. It may seem nitpicking, but there are lots of occurrences of head-inside-the-head arguments that flow from the basic error of treating these terms too literally.

    JBD

  50. Tom

    re: Churchland et al

    There is not much point in pointing me in the direction of this kind of work as I’ve already read my fill of it. It’s what Karl Popper would call suspect science at best, and fraudulent pseudo-science at worst. Very little of it flows from natural science: most of it proceeds on (usually) logical and detailed grounds – alas, from the flakiest conjectures, almost never citing any findings from biologists or neuroscientists, arguing always from structure about what "must" be true.

    “Churchland and Sejnowski: http://papers.cnl.salk.edu/PDFs/Neural%20Representation%20and%20Neural%20Computation%201990-3325.pdf

    P352 – “cognition essentially involves representations and computations”..

    Does it? Of course it does! The paper says so. It MUST be true. Nice work for a "scientist" if he doesn’t have to do any science: he can just state the facts as he sees them and not bother with research. Let’s get rid of the men in white coats and just start peeling off the axioms. Just like Marxist economists in Popper’s day, or Darwinian race biologists, or Indian fortune tellers.

    Let’s avoid maths, chemistry, biology or anything that actually produces definitive predictions, or statements capable of being falsified, and let’s just live in a world of axioms and self-referential definitions. That way we’ll truly get nowhere, but there’s a hefty academic bandwagon with a large supply of taxpayers’ money we can latch onto.

    JBD


  51. John: "The problem is all our theories of the external world grow out of our sensual experience of it."

    I disagree. We have no sensory experience of the volumetric space we live in because we have no sensory apparatus for detecting this world space. It is the innate brain mechanism of consciousness that gives us our experience of being at the perspectival origin of a surrounding world. This is what all our theories of the external world grow out of. See "Evolution's Gift: Subjectivity and the Phenomenal World" on my ResearchGate page.
