It has been reported in various places recently that Giulio Tononi is developing a consciousness meter. I think this all stems from a New York Times article by the excellent Carl Zimmer where, to be tediously accurate, Tononi said “The theory has to be developed a bit more before I worry about what’s the best consciousness meter you could develop.” Wired discussed the ethical implications of such a meter, suggesting it could be problematic for those who espouse euthanasia but reject abortion.

I think a casual reader could be forgiven for dismissing this talk of a consciousness meter. Over the last few years there have been regular reports of scientific mind-reading: usually what it amounts to is that the subject has been asked to think of x while undergoing a scan; then, having recorded the characteristic pattern of activity, the researchers have been able to spot from scans with passable accuracy the cases where the subject is thinking of x rather than y or z. In all cases the ability to spot thoughts about x is confined to a single individual on a single occasion, with no suggestion that the researchers could identify thoughts of x in anyone else, or even in the same individual a day later. This is still a notable achievement; it resembles (I can’t remember who originally said this) working out what’s going on in town by observing the pattern of lights from an orbiting spaceship; but it falls a long way short of mind-reading.

But in Tononi’s case we’re dealing with something far more sophisticated.  We discussed a few months ago Tononi’s Integrated Information Theory (IIT), which holds that consciousness is a graduated phenomenon which corresponds to Phi: the quantity of information integrated. If true, the theory would provide a reasonable basis for assessing levels of consciousness, and might indeed conceivably lead to something that could be called a consciousness meter; although it seems likely that measuring the level of integration of information would provide a good rule-of-thumb measure of consciousness even if in fact that wasn’t what constituted consciousness. There are some reasons to be doubtful about Tononi’s theory: wouldn’t contemplating a very complex object lead to a lot of integration of information? Would that mean you were more conscious? Is someone gazing at the ceiling of the Sistine Chapel necessarily more conscious than someone in a whitewashed cell?
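IIT’s Phi is defined through elaborate partition-based measures, but the underlying intuition – how much information the whole carries beyond what its parts carry separately – can be illustrated with ordinary mutual information. The sketch below is a toy stand-in, not Tononi’s actual Phi: it measures the information shared between the two halves of a tiny two-bit system, so that independent halves integrate nothing while perfectly correlated halves share a full bit.

```python
from math import log2

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint dict {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# Two independent fair bits: nothing is integrated between the halves.
independent = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
# Two perfectly correlated bits: one bit of shared information.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(independent))  # 0.0
print(mutual_information(correlated))   # 1.0
```

On this crude proxy, a system gazing at the Sistine ceiling would indeed score higher than one facing a whitewashed wall whenever its internal states become more richly correlated, which is exactly the worry raised above.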

Tononi has in fact gone much further than this: in a paper with David Balduzzi he suggested the notion of qualia space. The idea here is that unique patterns of neuronal activation define unique subjective experiences. There is some sophisticated maths going on here to define qualia space, well beyond my comprehension; yet I feel confident that it’s all misguided. In the first place, qualia are not patterns of neuronal activation; the word was defined precisely to identify those aspects of experience which are over and above simple physics; the canonical thought experiment of Mary the colour scientist is meant to tell us that whatever qualia are, they are not information. You may want to reject that view; you may want to say that in the end qualia are just aspects of neuron firing; but you can’t have that conclusion as an assumption. To take it as such is like writing an alchemical text which begins: “OK, so this lead is gold; now here are some really neat ways to shape it up into ingots”.

And alas, that’s not all. The idea of qualia space, if I’ve understood it correctly, rests on the idea that subjective experience can be reduced to combinations of activation along a number of different axes. We know that colour can be reduced to the combination of three independent values (though experienced colour is of course a large can of worms which I will not open here); maybe experience as a whole just needs more scales of value. Well, probably not. Many people have tried to reduce the scope of human thought to an orderly categorisation: encyclopaedias, Dewey’s decimal index, and the international customs tariff, to name but three; and it never works without capacious ‘other’ categories. I mean, read Borges, dude:

I have registered the arbitrarities of Wilkins, of the unknown (or false) Chinese encyclopaedia writer and of the Bibliographic Institute of Brussels; it is clear that there is no classification of the Universe not being arbitrary and full of conjectures. The reason for this is very simple: we do not know what thing the universe is. “The world – David Hume writes – is perhaps the rudimentary sketch of a childish god, who left it half done, ashamed by his deficient work; it is created by a subordinate god, at whom the superior gods laugh; it is the confused production of a decrepit and retiring divinity, who has already died” (‘Dialogues Concerning Natural Religion’, V. 1779). We are allowed to go further; we can suspect that there is no universe in the organic, unifying sense of this ambitious term. If there is a universe, its aim is not conjectured yet; we have not yet conjectured the words, the definitions, the etymologies, the synonyms, from the secret dictionary of God.

The metaphor of ‘x-space’ is only useful where you can guarantee that the interesting features of x are exhausted and exemplified by linear relationships; and that’s not the case with experience.  Think of a large digital TV screen: we can easily define a space of all possible pictures by simply mapping out all possible values of each pixel. Does that exhaust television? Does it even tell us anything useful about the relationship of one picture to another? Does the set of frames from Coronation Street describe an intelligible trajectory through screen space? I may be missing the point, but it seems to me it’s not that simple.
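The pixel-space point can be made concrete with a back-of-the-envelope calculation (plain Python; the 1920x1080, 8-bits-per-channel screen is an assumed example, not anything from the paper). It shows both how astronomically large the space of possible pictures is and how poorly distance in that space tracks similarity of content:

```python
from math import log10

# A modest 1920x1080 screen, 3 colour channels, 8 bits per channel.
pixels = 1920 * 1080
# log10 of the number of distinct possible pictures: 2^(pixels * 24).
states_log10 = pixels * 3 * 8 * log10(2)
print(f"about 10^{states_log10:.0f} possible pictures")

# Euclidean distance in pixel space says little about content:
# the same pattern shifted by one pixel lands "far away".
a = [0, 255, 0, 255, 0, 255]   # a tiny one-dimensional "image"
b = [255, 0, 255, 0, 255, 0]   # the identical pattern, shifted by one pixel
distance = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
print(distance)  # large, despite the two pictures being near-identical in content
```

Nearly all points in this space are noise, and near-identical scenes can sit far apart, which is why successive frames of Coronation Street trace no intelligible trajectory through it.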

29 Comments

  1. Charles Wolverton says:

    Peter’s reference to Mary in the context of IIT suggests some possibly interesting avenues to pursue.

    First, a critical distinction between “information” as used in normal conversation and its technical sense: the former includes semantics, the latter doesn’t. In its technical sense (as I use it henceforth), information transfer involves only resolution of uncertainty between two would-be communicators as to which of a set of possible symbols has been sent. For example, suppose I want to convey to you the outcome of a fair coin toss by raising either my right or my left hand; ie, we agree on a two symbol “alphabet” {L,R}. Before the toss there will be uncertainty whether I will “send” L or R. Afterward, that uncertainty will be resolved (assuming error-free communication – you can unequivocally determine which hand I raise). Having resolved which symbol I have “sent”, you now have “received” one bit of information. But note that you still don’t know anything about the result of the coin toss – the symbols have no semantic content unless we have previously agreed to a correspondence between {L,R} and {“head”,”tail”}. Assuming we have done so, the one bit of information will inform you of the result of the toss.

    Back to Mary. Consider her situation prior to her first exposure to a color other than black or white. When Mary encounters any new object, she will have one of two color-phenomenal experiences – call them “B” and “W”. It is implicit in the original statement of the thought experiment that Mary knows the correspondence between {B,W} and {“black”,”white”}, but for the moment let’s assume she doesn’t. In that case, when encountering a new object Mary has either B or W, and therefore – according to IIT, as I understand it – receives one bit of information. But as in the coin toss scenario, {B,W} by themselves have no semantic content – that requires the correspondence Mary is assumed not to know.

    Now assume she does know that correspondence. Suppose that at Mary’s first exposure to new colors she encounters two objects, one red and one blue. She will have two new color-phenomenal experiences to add to her “symbol set”, say, C and Q. Therefore, when in the future (but prior to additional new color-phenomenal experiences) she sees an object and has one of {B,W,C,Q}, she “receives” two bits of information (subject to assumptions irrelevant to the point). But until someone teaches her the correspondence between {C,Q} and {“red”,”blue”}, those new symbols will have no semantic content.
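    The counting in the two paragraphs above can be sketched in a few lines of Python (a toy illustration of the standard log2 formula, using the symbol names from this comment; nothing here comes from the IIT paper itself): an equiprobable symbol drawn from an alphabet of n symbols carries log2(n) bits, while the semantics lives in a separate, optional mapping.

```python
from math import log2

def bits_per_symbol(alphabet):
    """Information, in bits, carried by one equiprobable symbol from the alphabet."""
    return log2(len(alphabet))

# Before colour: Mary's phenomenal "alphabet" is {B, W}.
print(bits_per_symbol({"B", "W"}))            # 1.0
# After seeing red and blue objects: {B, W, C, Q}.
print(bits_per_symbol({"B", "W", "C", "Q"}))  # 2.0

# Semantics is a separate, agreed-upon correspondence; without entries for
# C and Q the new symbols carry information but say nothing about the world.
meaning = {"B": "black", "W": "white"}  # no entries yet for C or Q
```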

    And this is the idea that Mike and I were tossing around on a previous thread re Mary. Acquisition of knowledge (ala Sellars) is a social practice (what we called, mimicking Davidson, “triangulating”) that requires three entities: a learner, a community, and the world. And it is only through triangulating that Mary can learn the missing correspondence. In the original statement of the thought experiment, the community component is not mentioned and therefore must be assumed to be missing – ie, Mary confronts her new colorful world alone. So in this sense, when she encounters new colors she acquires information but not knowledge, ie, she learns nothing new – assuming “learns” is taken to mean “acquires new knowledge (ala Sellars)”.

    I’ll leave to others to decide whether – and if so, how – this might be applicable to the IIT. Having spent some years in the distant past doing info theory, I’ve had enough – at least until such time as the skepticism of IIT that Peter and I share is dispelled.

  2. Kar Lee says:

    Peter,
    Perhaps if we rename the “conscious meter” an “awareness meter”, many objections will go away? As a not-well-defined philosophical concept, consciousness is always something that will elude detection in any scientific way. But one is more likely to be able to measure the degree of awareness a subject under test has towards its environment from its response. Still tricky, but perhaps more manageable.

    Charles,
    Regarding the requirement of the existence of a community for knowledge: Suppose you are the only person alive. One day you climb up the hill behind your house and you come across a slippery rock which prevents you from climbing higher. You return home and start making a rope for your next attempt. Now, to me, knowing that there is a piece of slippery rock blocking your path up there is “knowledge” that you did not have before, independent of the existence of a language-sharing community. Why is it not true? Why is the association with a language-sharing community necessary? Or perhaps this is just another case of different people using the same term “knowledge” to refer to different things?

  3. Charles Wolverton says:

    KL:

    Many of the folksy examples you use in your comments are examples of what Sellars called the “Myth of the Given”, the dispelling of which is the objective of the essay to which I repeatedly refer, “Empiricism & Phil of Mind”. Although difficult (at least for me – even more so, I would guess, for those strongly committed to that myth), it’s only 117 pages. I am not competent to summarize the complex reasoning accurately, but in essence the myth is thinking that by merely seeing (or otherwise sensing) something (eg, a “slippery rock”), you acquire knowledge (as Sellars uses the word) about it. If you read my comment above carefully and with an open mind, you might catch hints of why Sellars called this a myth – why merely seeing something that is “red” does not constitute the acquisition of new knowledge.

    I consider that essay indispensable reading for anyone serious about these topics (which admittedly means nothing, since I’m a nobody – but so do Dennett, Rorty, Brandom, and many other notables) and strongly suggest giving it a try. In the meantime, we simply can’t communicate since our baseline assumptions are incompatible.

  4. Vicente says:

    Well, I see this paper as another small step forward. It basically adds nothing new or consistent about consciousness or qualia, but it envisages, at a high level, a framework, or a possible tool, for building NCC mappings.

    I went again through “A Universe of Consciousness” (2001) by Edelman and Tononi, and the fundamentals of the information theory used in the paper were already there… In this sense I find it surprising that the notion of reentry is not explicitly raised in this work.

    It seems they have tried to reuse some quantum physics concepts like state space and entanglement, rearranged in what they refer to as a geometrical fashion; I would call it “graphical”. I think they should have used the word “coupled” rather than “entangled” (fashionable nowadays).

    The author makes at the very end the best analysis that could possibly be made of the paper:

    “…These abstract geometrical (graphical?) notions may seem at first to be far removed from the immediacy of experience…”

    “…Ultimately, however, the goal of the present framework is to offer a principled way to begin translating the seemingly ineffable qualitative properties of experience into the language of mathematics…”

    Most importantly from a scientific point of view, the author acknowledges that a lot of further work is required in order to identify and understand the possible neurological mechanisms that substantiate and support their “idea”.

    I share most of Peter’s ideas, but I confess I like this paper; I believe it is an imaginative approach that could lead to the development of useful tools for future work. Personally, I find in this work ideas that support my own notion of the brain being a machine that constructs a “data package”, which has to be subsequently used somewhere else in order to produce the phenomenological experience. In this sense, the paper makes no reference to the big problem of the self, or the observer… Once the integrated information processes claimed take place, WHO is informed? Who receives the integrated information? Or is it that we have integrated information (qualia) packages bouncing around the brain? Because if unicity is one of the fundamental characteristics of consciousness, then, at each instant, the single and unique integrated information package created, which fills the conscious space, has to be received by something. As Charles very well noted, the key point is semantics.

    I agree with KL: replace consciousness by awareness, remove references to qualia from the text (i.e. rewrite it), and the paper would look much more scientific. But I like it overall.

  5. John says:

    The term “Consciousness Meter” is just a clever bit of headline-grabbing. However, unless you believe that conscious experience is homogeneous, your experience will always have heterogeneities that will have arrangements. The possible arrangements constitute a capacity for information, and each arrangement is bits of information. There can be no doubt that information theory might be applied to experience; the only doubt is whether this particular model is credible.

  6. Peter says:

    I agree that switching to ‘awareness meter’ would avoid a lot of problems (perhaps, taking John’s point, not so good from the publicity point of view, though still surely worthwhile).

  7. Kar Lee says:

    Charles,
    “…it’s only 117 pages. I am not competent to summarize the complex reasoning accurately..”
    If a person as intelligent as you cannot summarize it in plain English, I have little hope of understanding it myself. On the other hand, as some statisticians say, “If the plot looks random, it probably is random.” If it is too complex to describe in plain English, it probably is too complex to be true. As I dug around on the web, I found various attacks on the “Myth of the Given”. From the surface of it, I can infer that this question is far from settled.

    Also, allow me to share with you some of the philosophical language that I find impenetrable in Sellars’ writing (I hope I am not quoting the wrong writing):

    “We have seen that the fact that a sense content is a datum (if, indeed, there are such facts) will logically imply that someone has non-inferential knowledge only if to say that a sense content is given is contextually defined in terms of non-inferential knowledge of a fact about this sense content.”

    It is just too convoluted for me to follow (thus pardon my “folksy” approach). I can see why you cannot summarize it. But, then, what caused you to believe in it, and come back to it multiple times? There must be some core thing that is so convincing to you. And I would like to hear what it is, if you don’t mind.

  8. Vicente says:

    Charles, I’ve had a look in the Stanford Encyclopedia of Philosophy at Sellars’ work, and it doesn’t look very indispensable to me, rather a sleeping pill.

    The first thing would be to define very well: knowledge, semantics and information, which is not that easy.

    Then, one thing is to have an experience, and another thing is to understand it. One thing is to see you raising your right hand (which, provided you really exist and we have basic notions of anatomy, gives us new information (knowledge?) about the fact of your raising your hand), and another thing is to know that it means the result of the coin toss is tails, for which we would need to share the communication code.

    Of course, since we have no direct access to reality, everything becomes a myth (including science), we didn’t need Sellars for that. The point is that not all the myths have the same value.

  9. Charles Wolverton says:

    KL & Vicente:

    Lest I be accused of blatant thread-jacking, before addressing your comments I’ll briefly explain why I thought E&PM possibly relevant to the IIT paper. Knowing that the paper addressed “information” in the technical sense, assuming that when people say “consciousness” – AKA “awareness” – they have in mind something related to “immediacy” (an assumption confirmed by the quote Vicente provided in comment 4), and knowing that “immediacy” is the subject of E&PM, I thought – perhaps wrongly – there might be a connection worth pursuing.

    My replies:

    Though very unlikely to be more “intelligent” than others who would frequent a forum like this, I may well have a more extensive background in abstract math than many. Sellars’ style appears convoluted because he is doing what Vicente suggests: “defin[ing] very well: knowledge” and the other terms he employs, in effect creating a new vocabulary that must be learned. He also writes precisely – as a mathematician might – and the result is a logical flow – expressed in unfamiliar terms – that is complex, convoluted, but also tightly structured – and yes, difficult. Coincidentally, I spent a couple of hours this AM trying to decipher essentially the quote KL provided. The effort was more or less successful, and believe it or not that admittedly opaque quote now appears to make sense. As for “too complex to be true”, try concisely summarizing in “plain English” some abstract math concept – Goedel incompleteness, Galois extension fields, reproducing kernel Hilbert spaces. They, too, require learning a new vocabulary, a tedious and time consuming task. “Nothing ventured, nothing gained.”

    Sellars’ importance was brought to my attention by reading that some contemporary heavyweights single out Quine, Wittgenstein, and Sellars as the three pivotal figures in 20th C phil of mind (Sellars, for various reasons – perhaps including those discussed above – being decidedly the least well-known). To be included in that exalted company suggests that Sellars might have something important to say.

    Finally, “raising hands” was (obviously, or so I would have thought) just a heuristic, my own attempt to introduce a technical concept (the binary symmetric communication channel) in a “folksy” way. It never occurred to me that it could trigger observations about anatomy, ontology, semantics, and epistemology.

  10. Vicente says:

    Charles, yes, no pain no gain; I withdraw the silly sleeping-pill comment. But your use of the adjective “indispensable”, followed by your reference to great mathematicians – Hilbert (the last great one) and others – has made me think: what is indispensable? That only makes sense once you have set a goal, so: indispensable for what? What is it that I want to know? What do you want to know? Because this whole messy garden of philosophy of mind/consciousness is f. stupid and sterile unless we know what we want; it is not the case with applied disciplines, which at the end of the day serve to make human life easier (well, maybe not). Your question made me remember why I got interested in all this. It is not that I want to know what consciousness is or how the brain works for the sake of it (which are definitely interesting issues anyway); it is that I want to know: what am I? Who am I? Where am I going? And those primary questions lead to all the rest. So what is indispensable? Anything needed to get the answer. For the time being, “the notables” have not helped much, and as far as the real problem is concerned, those notables know more or less the same as you and I do, which does not apply to notables in other fields (Hilbert, for example); too bad for the consciousness field’s notables… Having said this, I really appreciate the effort and wisdom of all of them, and all the reflections they have inspired; it is not their fault what happens, and having chosen this field means they are probably all nice guys, like their colleagues at Wall St.

    As time passes I am more and more convinced that ultimately the problem cannot be tackled with intellectual and logical means; they will not trespass the barrier between the neurological (weak) problem and the hard problem. Exploring other strategies like meditation, trying to get direct insight into the issue, could be a way forward. I don’t know.

  11. John Davey says:

    It’s a start. I’m not quite so sure about the preamble – consciousness ‘is’ information – since consciousness is a phenomenon and computational information is numeric, so the ontologies don’t mix. Nevertheless it makes sense that consciousness has ‘more’ information as a characteristic than unconsciousness. The maths looks like the right approach, along statistical physics lines (although I didn’t see much to say that given x, then y).

    Maybe it will produce an obvious metric that will be more apt than watching for a snore.

  12. Charles Wolverton says:

    “Not quite so sure about the preamble – consciousness ‘is’ information – consciousness is a phenomena and computational information is numeric, so the ontologies don’t mix.”

    Before reading the paper a bit more carefully, I had the same concern (though being an ontology-phobe, I predictably would have expressed it differently. :-) ). Now I’m not so sure.

    An organism can respond to stimuli in a spectrum of ways ranging from primitive (eg, simple defensive reflex) to very complex (eg, ones we describe something like “making a strategic decision”). And to prepare and execute more complex responses presumably requires more complex sensory inputs and more complex processing of those inputs. In moving through that complexity spectrum, however one defines “consciousness” presumably a point is reached beyond which one would describe some responses as “conscious” rather than merely reflexive, and presumably those highly complex “conscious” responses would require correspondingly complex inputs and processing thereof. One aspect of such processing presumably would be the appearance of the idea of “knowledge”, which in the Sellars sense involves concepts. So, one indication that response processing has moved from dealing only with information (in its semantic-free sense) to involving knowledge might be that the processing deals with concepts.

    And voila! – the paper introduces an entity called a “concept”. But is their use of that word related to its use in the Sellars sense? Surprisingly, I think it actually is. As in several instances, they have taken a familiar idea and renamed it (eg, their “mechanism” appears to be a plain ole “state transition matrix”). As best I can tell, their “concept” is essentially a feature extraction processor (line, edge, color, shape, etc), and I think such features are pretty much what Sellars had in mind with his use of the word. They even address the ability to “learn” new “concepts”, a necessary component of any processor dealing with knowledge.

    So, my skepticism about the paper is significantly reduced, at least to the extent that I understand what they’re doing. If “integrated information” increases with additional or more capable “concepts”, it may well be that in a sense it is a measure of what we think of as “consciousness”. But as John suggests (I think), the “math” is mostly just a bunch of cumbersome definitions, the practical utility and computability of which remain to be shown.

  13. Vicente says:

    An organism can respond to stimuli in a spectrum of ways ranging from primitive (eg, simple defensive reflex) to very complex (eg, ones we describe something like “making a strategic decision”). And to prepare and execute more complex responses presumably requires more complex sensory inputs and more complex processing of those inputs

    Absolutely not; that range does not exist. You are mixing a reflex, which is a purely unconscious process that in many cases doesn’t even reach the brain (the loop is closed in the cord), with a high-level cognitive operation, most probably very conscious, that might not even need sensory inputs.

    You’ve started the kind of evolutionary reasoning that requires “faith” in the chain, which I don’t have.

    But as John suggests (I think), the “math” is mostly just a bunch of cumbersome definitions, the practical utility and computability of which remain to be shown.

    The problem with those maths (standard information theory, actually better explained in the ref. I gave, same author) is that you have to accept beforehand that the brain works like a computer – massively parallel if you want, but a computer – and that has to be proven. Have a look at Penrose’s work; those are maths!! He claims, and I presume he’s right, though I still need to fully digest his approach, that the brain is not a computer. There is also a page in this blog for that topic.

  14. John Davey says:

    “The problem with those maths (standard information theory, actually better explained in the ref. I gave, same author), is that you have to accept beforehand that the brain works like a computer, massively parallel if you want, but as a computer”

    actually you don’t, although there is a huge difference between acting like a computer and actually being a computer. As I mentioned in my previous comment, it’s a bit like the difference between consciousness being information, as Dr Tononi suggests, and having information as a measurable attribute, as I would be inclined to believe. There is no reason a measurable attribute of a brain can’t be incorporated into a mathematical system. None at all.

  15. Vicente says:

    although there is a huge difference between acting like a computer and actually being a computer

    Come on! Unless you are an actor pretending to be a computer, there is no difference in practice. How do you act as a computer without being a computer, and why would that happen, in non-trivial cases? Would the brain act like a computer without being one (if that were the case) just for fun?

    The only way you could say that is treating computers as black boxes, and only looking at inputs and outputs. I could accept that maybe some parts of the brain work like computers (analogue?).

    There is no reason a measurable attribute of a brain can’t be incorporated into a mathematical system

    I agree; we can incorporate the number of synapses for example… or the average membrane voltage when depolarised… Isn’t that sort of what they are doing in the Blue Brain Project? Down to what level are they going to incorporate attributes? Maybe even considering neurotransmitters opening channels… he he…

  16. Michael Baggot says:

    For a most excellent and straightforward discussion of Sellars check out:
    http://www.iep.utm.edu/sellars/

  17. Charles Wolverton says:

    Responses to Mike’s comments 131-3 on the “Headless Consciousness” thread, which are relevant to this thread:

    Well, Mike, whether it’s a good sign or bad one, your comments make sense to me.

    I didn’t get the impression that the authors intended their (most unfortunately named) “qualia space” to capture “subjective” aspects of qualia, only objective, though subtle, aspects of responses to phenomenal events. To the limited extent that I understand it, their approach appears (in principle, at least) to try to capture, for example, differences in individual responses to a common visual stimulus and the interdependence between an individual’s neurological responses to, say, visual and aural stimuli consequent to a single phenomenal event. But since I’ve never understood what “subjective” aspects qualia are supposed to capture, I may be missing something. Of course, I assume that whatever they intend to be capturing in their equations can’t be “subjective” since equations presumably are “objective” in any relevant sense of the word. (But see note below.)

    I thought a phrase or two in the paper suggested the sort of confusion about color (actually, visual processing in general) that you attribute (rightly or wrongly) to Peter. So, just for my edification if nothing else, I hope you find some time to at least skim the paper for that sort of thing and report back.

    Note: I’m unsure of the significance of the objective-subjective distinction when in these deep waters. Eg, does considering the “emotional” aspects of a phenomenal experience – with that word meant in the Damasio sense of biochemical processes throughout the body – make the analysis more objective or more subjective? Or does “subjective” simply mean “can’t be captured in words” – in which case isn’t trying to discuss a “subjective” topic futile? Or put otherwise, what criteria make responses to a stimulus “objective”?

  18. Charles Wolverton says:

    Thanks to Michael Baggot for the pointer to that Internet Encyclopedia of Phil entry on Sellars, which unlike the Pritchard essay from “Dead Poets Society”, does appear actually to be “most excellent” – and relatively easy to read, especially compared with Sellars’ writing. I especially recommend the essay for those who seem hung up on 1-POV vs 3-POV since it explains how Sellars develops an approach to – if not overcoming, at least dealing with – that issue.

    For reasons that aren’t clear to me, the entry flirts with some ideas that relate to “qualia” (although that term isn’t used), but doesn’t pursue them. And there are statements here and there that one might question. But I would advise anyone who decides to read the essay to focus – at least in a first pass – on the overall picture of Sellars’ approach that the author is trying to paint, not on the fine grain detail. For that, the study guide by Brandom in the edition of E&PM with an introduction by Rorty might be helpful.

  19. Mike Spenard says:

    Perhaps my subjectivity point was a tad too subtle. Tononi & Balduzzi appear to want to find a universal aware-of-redness detector. But for the brief reasons I gave, it’s likely misguided. So /perhaps/ we can call this a kind of subjectivity (which is emphatically not the kind qualia-philes are after), just not subjective in any typical ontological or intuitive sense.

    Another way of putting this point (and the one on a ‘gold standard’) is: if you were to build a ‘truly’ accurate aware-of-human-redness detector, you would end up with a full-fledged human. Which means a subject is mandated in ‘detecting’. Hardly what they are after.

    . . .

    “I’m unsure of the significance of the objective-subjective distinction when in these deep waters.”

    Same here. A while ago I was laboring over why blue is the least saturated color and yellow the most saturated (chromatic vs. achromatic channel ratios). Is this a subjective or an objective thing we are explaining? The distinction, for me, has completely fallen apart. But for many here it is so clear.

    “Or does “subjective” simply mean “can’t be captured in words””
    There is, I think, a rather large distinction between exclusivity (the core aspect of subjectivity so strongly longed for) in principle vs. exclusivity in practice. My thoughts above are really advocating what you just suggested, “can’t be captured in words”, which is only to admit of practical exclusivity. It’s not what qualia-philes want (they want exclusivity even if some person knows all there is to know about the physical state of the universe), but it /is/ a real subjectivity of sorts, perhaps.

    Anyhow, just putting these ideas out there; I don’t really feel too strongly about them.

  20. Vicente says:

    “Or does “subjective” simply mean “can’t be captured in words””

    No, it means it can’t be “fully” captured by others, because it belongs to the “subject”, in opposition to “objective”, which belongs to the object and can therefore be shared by many (or not?).

    Now, what is an object? Does it exist irrespective of the observer, as an object – in terms of concept, not matter? (This one is specially dedicated to ontology-phobes.)

    Isn’t it that before many can share “the objective”, it has to become subjective in their minds, so that, in a contradictory and absurd way, they can’t really share it after all? I admit I am a bit lost in this subjective/objective game.

    This is the dangerous slippery slope that ends very near solipsism.

    I said “fully” so as not to put an end to the discussion, and to leave room for debate on this topic, which I believe is central to the philosophy of mind.

    I am also trying to understand how “capture” applies here: do you mean described? Presented? Can anything be captured by words, even for yourself? I don’t think so; as was discussed when talking about pain, the feel is decoupled from the word.

  21. Mike Spenard says:

    “Or does “subjective” simply mean “can’t be captured in words””

    I understood this as saying 1) subjective mental terms cannot be stated (satisfactorily to our intuitions) in objective terms, and 2) Charles’s 1st person (subjective) content can only be communicated (“captured in words”) to another person by using mental terms; which means both persons must have the capacity for said mental content; therefore, it might not be possible for Charles to “put into words” the experience of his content to another type of agent (e.g. an alien).

    Hopefully I captured his intent properly in words and it’s not too far off from what he had in mind ;)

  22. Mike Spenard says:

    Or, rather than making a point about the “exclusivity” of subjectivity, we could take it as making a point about the supposed “ineffability” of mental content (not that these are mutually exclusive): e.g., “words fail to capture the sweet repose felt from the cold night lost in the woods”.

  23. Kar Lee says:

    Michael Baggot,
    Thanks for the link to an introduction to Sellars’ ideas. The article is quite readable. I also found Sellars’ own writing here: http://www.ditext.com/sellars/epm1.html, through http://www.ditext.com/sellars/epm16.html. It can be read online for free.

    If this introductory article does faithfully reflect Sellars’ ideas, then I would like to discuss it further; otherwise it would be a waste of the link.

    1) Difference between thinking and sensing

    While Descartes lumped all mental activities into one single category – thinking – thus leading to “I think, therefore I am”, I believe feeling (or sensing) is the only true mental event. Thinking in fact happens in the physical domain. The thinking process can even happen inside a computer, as in a number-crunching exercise. It is the accompanying quale (sensing) of thinking that sets apart the thinking of a conscious being from the thinking of a computer during a calculation. So, in this sense, I agree that Descartes was wrong about that. His famous statement should be modified to “I feel, therefore I am.”

    2) One cannot think without language

    Basically, the term “language” is broadened to include every element in the thinking process. In this sense, this has to be true. On a first pass, I mistook language to mean the regular usage of the word, with semantics, grammar, etc. I even included mathematical equations in the term. So I came up with all sorts of reasons that language does not have to be part of the thinking process. For example, if you are designing a gear-box, or a simple mechanical fixture to fit into another piece, language, in its normal sense, is the last thing you will use. It is visualization, it is manipulating some image in your head, it is the 3-D simulation of physically fitting two things together inside your mind that you will perform in such an activity. In the regular sense of the word “language”, these are not language; it is just simple visualization. But as I suspected, and came to believe on a second pass, this is also what Sellars called “language”. So, in Sellars’ sense, thinking about someone shooting a basketball is part of using the image “language”. With such a broadened sense, thinking is indeed a “language” running in your brain. But this language has hardly anything to do with communication between two individuals.

    3) One cannot sense something without the pre-existence of a concept

    This is necessarily false. Yes, once you have bought a VW Golf, you suddenly notice there are many more VWs on the road than you noticed before, so a concept does help with your sensing capability; but the meaning of this “sensing” is not the same as the one generating qualia. This sense means “detection”, a word with a functional purpose, while the sensing, the feeling part – the part that is intrinsic and qualia-generating – is completely glossed over. Carrying this argument from Sellars through will necessarily end up in absurdity: I won’t feel any pain unless someone has already given me the concept of pain to begin with. Who was the first human to discover pain, then? Thank you for bringing us all these pains!

    This part is what Charles W. has been bringing into this discussion: you don’t have the knowledge of pain unless it is “triangulated”. Even though I strongly disagree, I am keeping an open mind by pointing to this example I read about from Jill Bolte Taylor, the brain scientist who had a stroke. She wrote in her book “My Stroke of Insight” that her world became black and white after her stroke; at least, she did not notice any color. Then one day, during her recovery, she was playing with a kindergarten-level jigsaw puzzle (even that gave her trouble), and her mother commented that she could use the colors to help. All of a sudden, the world turned colorful, and it was the first time she saw color in the many months since her stroke.

    Now, this can be taken as evidence for the claim that you need the corresponding concept in order to be able to feel. However, another possibility is that she simply had not bothered to notice color after her stroke until that moment; afterwards, unable to recall her color experience before that point, she concluded that it must have been black and white. This would be similar to meeting a new person, failing to recall afterwards what color shirt he was wearing, and your brain filling in the blank by inventing that it was white.

    Based on the article that Michael Baggot kindly pointed to, I believe it is wrong to claim that in order to sense something (the qualia-producing type of sensing, not the detection type – a distinction similar to that between the phenomenal-consciousness and access-consciousness types of sense) you have to have the concept to begin with.

    The world just shows up for you, whether you have the concept or not, period. You develop the concept after you sense it.

  24. John Davey says:

    “on! unless you are an actor pretending to be a computer, there is no difference in practice. How do you act as a computer without being a computer? or why would that happen? in non trivial cases. The brain acts like a computer but it is not (if it were the case), just for fun. ”

    A computer is an object that fulfils the requirements of a Turing machine. A Dell laptop is a computer. A human being doing long addition is acting like a computer but is evidently not a computer. A brain can appear computer-like, but that does not mean it is a computer, i.e. that it is a Turing machine. A painting of a duck looks like a duck from a distance, but a painting of a duck is not a duck.
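    To make the definition above concrete, here is a minimal sketch of a one-tape Turing machine – purely illustrative, not anything from the discussion; the binary-increment transition table and all names are my own hypothetical example. The point is that being "a computer" in this sense is a matter of satisfying a formal scheme (states, tape, transition rules), not of any physical constitution:

```python
def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        # Each rule maps (state, symbol) -> (next state, symbol to write, move)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(sym for _, sym in sorted(cells.items())).strip(blank)

# Hypothetical transition table for binary increment: scan right to the
# end of the number, then carry back leftward.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}

print(run_turing_machine("1011", INCREMENT))  # prints "1100" (11 + 1 = 12)
```

    A human could execute this table with pencil and paper – acting like a computer – which is exactly the distinction John draws between doing what a Turing machine does and being one.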

  25. Vicente says:

    John, OK, the language trap again. What is “acting like a computer”? What does “to act like” imply? That is why I said it only makes sense if you consider computers and brains as black boxes.

    A painting of a duck entails no action or behaviour. Nevertheless, for an observer located far enough away, a painting of a duck and a sleeping, still duck are the same, probably because things are often what we perceive (with or without the myth of the given). If there is no observer, the only duck is the one in its own mind (the sleeping, still one’s, not the picture’s), which might not include such a concept as “duck”, so there is no duck at all. Now we are entering the dangerous ontological area, so beloved by many around here…

  26. John Davey says:

    “A painting of a duck entails no action or behaviour.”
    OK, instead of a painting I’ll have a perfect robotic simulation of a duck. It squawks, flies, even reproduces, but does not consist of cells constituted by duck DNA. It looks, walks and talks like a duck, but it still isn’t a duck. There is a difference between what something is and what something does.

    You are right in one respect, however: computers are observer-relative and have no existence outside the logical scheme of the von Neumann computer architecture. Hence computers, unlike brains, do not exist at all in nature. It is inaccurate to say that something is a computer, as nothing can be a computer. For practical purposes, however, this would be a bit tricky, especially for Dell! Saying that something is a computer really means ‘a Turing machine capable of nothing other than being a Turing machine’.

  27. Vicente says:

    I have found an interesting article – “Non-linear dynamics of the brain: emotion and cognition” – which I believe is complementary to Tononi’s paper under current discussion, and which explores (or at least suggests a possible line into) the neurological foundations that Tononi acknowledges still require a great deal of further work.

    http://iopscience.iop.org/1063-7869/53/4/R02

  28. A Brick o’ Qualia: Tononi, Phi, and the Neural Armchair « Three Pound Brain says:

    [...] seem to agree with Peter Rankin’s assessment of IITC on Conscious Entities, which boils down to ‘but red ain’t information’! [...]

