A new light on computation

A couple of years back I mentioned some ideas put forward by Mark Muhlestein: a further development of his thinking has now been published in Cognitive Computation; an accessible version is here.

The paper addresses the question of whether a computational process could support consciousness. Muhlestein begins by briefly recounting some thought-experiments proposed by Maudlin and others. Suppose we run a computation which instantiates consciousness: then we run the same process but remove all the unused conditional branches, so that now the course of the process is fixed. In the second case the computer goes through the same set of states (does it, actually?), but we’d be disinclined to think it could be conscious; it’s just a replay. In fact, it’s hard to see why ‘replaying’ it makes any difference since the relevant states all exist in the record, so that inert record itself probably ought to be conscious. Worse than that, since we could, given a sufficiently weird and arbitrary decoding, read random patterns in rocks as recording the same states, an indefinite number of copies of that same consciousness would be occurring constantly pretty much everywhere.

That’s more or less how the argument runs, anyway, and what’s proposed to fix the problem is the idea that for consciousness to occur, the computation must have the property of counterfactual sensitivity. It must, in other words, have had the capacity to go another way.  Without that property, no consciousness. The notion has a certain intuitive plausibility, I think: we can accept that in order for me to have the experience of seeing something, the fact that it was a red square and not a blue circle must be relevant; and that consciousness perhaps must be part of a stream which is, in some unobjectionable sense, free-flowing.

Muhlestein proposes a new thought experiment with a truly formidable apparatus. He sets up a physical implementation of Conway’s Game of Life and uses that in turn to implement a computer on which his processes can be run. Because his implementation of Life uses cellular automata which display and detect states using light, he can now intervene anywhere he likes by simply shining an external light into the process.

If that last paragraph is unintelligible, don’t worry too much: all you need to know for the purposes of the argument is that we have a computer set up so that we have a special power to intervene arbitrarily from outside and constrain or alter the process which is running as it progresses. Muhlestein now takes a conscious computational process (in fact he proposes to scan one on a whole-brain basis from his friend Woody – don’t try this at home, kids!) and runs it on his set-up; but here’s the cunning part: he uses his light controls not to interfere, but to project the same process back onto itself. In effect, he’s over-determining the process: it runs normally by itself, but his lights are enforcing a recorded version of the same process at the same time.

Now, the computation is taken to be conscious when running normally. It runs in exactly the same way when the lights are on; it simply loses the counterfactual sensitivity: it could no longer have gone another way. But why would extra lights on the process deprive it of consciousness? The outputs will be exactly the same, any behaviour of the conscious entity will be the same, and so on. Nothing about the fluidity or coherence of the process changes. Yet if we say it remains conscious, we have to give up the idea that counterfactual sensitivity makes a difference, and then we’re back in difficulty.
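To make the over-determination move concrete, here is a minimal sketch in Python. It is entirely my own illustration and has nothing to do with the optical cellular automata of the actual paper: a toy Game of Life grid is run twice, once freely and once with every generation clamped to a pre-recorded trace of the same run (standing in for the projected light). The clamped run has lost its counterfactual sensitivity – it could not have gone another way – yet the sequence of states it passes through is identical.

```python
# Toy Game of Life used to illustrate over-determination.
# Purely illustrative: nothing here models the optical apparatus in the paper.

def step(grid):
    """Compute one Life generation on a small fixed-size grid (dead boundary)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neighbours = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows and 0 <= c + dc < cols)
            nxt[r][c] = 1 if neighbours == 3 or (grid[r][c] and neighbours == 2) else 0
    return nxt

def run(grid, steps, recording=None):
    """Run freely, or clamp each generation to a recorded trace of the same run."""
    trace = [grid]
    for i in range(steps):
        grid = step(grid)
        if recording is not None:       # the 'external light': force the
            grid = recording[i + 1]     # recorded state onto the process
        trace.append(grid)
    return trace

glider = [[0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [1, 1, 1, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]

free_run = run(glider, 4)                      # counterfactually sensitive
clamped = run(glider, 4, recording=free_run)   # over-determined 'replay'
assert clamped == free_run                     # same states, lights on or off
```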

What do we say to that? Muhlestein ultimately concludes that there is no satisfactory way out short of concluding that in fact the assumptions are wrong and that computational processes are not sufficient for consciousness.

Old Peter (the version of myself that existed a couple of years ago) thought the problem was really about appropriate causality, though I don’t think he explained himself very cogently. He would rightly have said that bringing counterfactuals into it is dangerous stuff because they are a metaphysical and logical minefield. For him the question would be: do the successive states of the computation cause each other in appropriate ways or do they not? If they do, we may have a conscious process; if the causal relations are disrupted or indirect, we’re merely waving flags or playing with puppets. So in his eyes the absence of counterfactual sensitivity in Muhlestein’s example is not decisive and his lights do not remove the consciousness unless they disrupt the underlying causality of the computation: unless they make a difference, in short. Causal over-determination of the process is irrelevant. The problem for Old Peter is in explaining what kinds of causal relations between states have to obtain for the conscious computation to work. His intuition told him that in real processes, including those in the brain, each event causes the next directly, whereas in simulations the causal path is through a model or a script. Unfortunately we could well argue that this kind of indirect causal path is also typical of computation, leading us back to Muhlestein’s conclusion by a different route.  Myself, I’m no longer  completely sure either that it is a matter of indirect causality or that computational processes are necessarily of that kind.

For myself, I’m still inclined to be suspicious of counterfactual sensitivity, but I would rather say that what we’re missing is the element of intentionality; before we can complete the analysis of the problem we need to know what it is that endows a brain process with meaning and aboutness. The snag with that strategy is that we don’t know.

All in all, I found it interesting to look back a bit. It seems clear to me that over the years this site has been up I have made some distinct progress with consciousness: every year I know a little less about it.

66 thoughts on “A new light on computation”

  1. “I have made some distinct progress with consciousness: every year I know a little less about it.”

    Legend has it that a 20 year old young man was shown the form of Tai Chi by the Grandmaster, who happened to be the teacher of this young man’s father, and also the inventor of Tai Chi, both the form and the philosophy. The Grandmaster was celebrating his 90th birthday when a gang of bad guys kicked down the door and threatened to disrupt the proceedings, which prompted the Grandmaster to show this young man the form in the first place. After the Grandmaster finished the form, the young man said, “Grandmaster, I have forgotten 10% of what you have just shown me.” The Grandmaster was quite impressed, and he said, “let me show you one more time.” After the second time, the young man said, “This time I have forgotten about half of it.” The Grandmaster’s face turned a little bit tight, but he appeared to be even more impressed, and he said, “let me show you one more time.” This time, the young man reported he had forgotten everything, and the Grandmaster was completely ecstatic. He said, “Good boy! Now you have got it. Go and beat up the bad guys with Tai Chi.” And the young man promptly did so with his formless Tai Chi philosophy.

    Every year we know a little less. By the time we know nothing, we get it. Peter, looks like you are definitely on the right track! 😉

  2. But jokes aside, maybe computationalism just does not work? Though I force computationalism on Arnold with Roomba and see how he responds….

  3. Thanks for the encouragement, Kar! I do suspect pure computationalism may have furnished us with a lot of interesting problems that have gradually turned out to be distractions. But what do I know? 🙂

  4. “In fact, it’s hard to see why ‘replaying’ it makes any difference since the relevant states all exist in the record, so that inert record itself probably ought to be conscious.”

    This is the type of confused thinking only made possible by philosophers. ‘Consciousness’ is a physical process; there is no conscious experience in inert objects without the underlying dynamics (and we know of only neurobiological systems with such dynamics so far).

    “Yet if we say it remains conscious, we have to give up the idea that counterfactual sensitivity makes a difference, and then we’re back in difficulty.”

    Counterfactual sensitivity does not make a difference, and giving that up does not create any difficulties. Running the recorded system over and over again, if it were thermodynamically possible (and I’d argue that neurobiological and neuromorphic systems capable of consciousness are stochastic and not exactly reproducible), would result in that system consciously experiencing the same thing, given the same environmental and internal signals – which is fun to think about theoretically but may be practically infeasible.

    Intentionality and meaning are contextual, and have arisen through millions of years of evolution, having been built upon the repertoire of perceptual and sensorimotor modules within organisms that struggle to survive and reproduce in natural environments and later, in more social environments, where understanding the intentions of others is crucial to survival and propagation.

  5. Furthermore, I think where pontificators go down a wrong path is by treating computation too abstractly. If you concretize computation within the constraints of physics, and take into account the limitations of both the computability and computational complexity of the underlying laws governing the relevant physical systems, you can avoid such pitfalls. Computations are just causal state changes of physical systems; ‘computation’ is just a formal way of talking about such transitions and is a useful (perhaps the only) way of modeling complex systems like the brain. Abandoning platonic thinking and, as Richard Feynman urged, just looking at the thing will be enough to reveal the mystery.

  6. Peter,

    “…in real processes, including those in the brain, each event causes the next directly, whereas in simulations the causal path is through a model or a script.”

    Are you sure that your brain is not running scripts quite often? Is not “practical” learning, or skills acquisition, just uploading scripts onto your brain?

    Trying to connect this with the post about “subliminality”, maybe what you say makes the difference, i.e. script means subconscious, no-script means conscious.

    Actually, there is probably nothing further away from a script than a pure “random” succession of perceptual (sensory) states, and assuming that you were focusing your attention on that series of perceptual states, I can’t picture anything more conscious than that.

  7. Yes, tricky one, Vicente. I feel pretty sure the brain is running scripts quite often; it’s more a matter of scripts not being the underlying or the essential activity.

    As I say, this is an intuition I (or Old Peter) have never quite been able to formulate clearly enough. In a computation we’ve typically got a controlling process which is running in a sequence, and a set of sequential outputs: but the control process can in principle produce any sequence of outputs. The feeling is that at the bottom level of consciousness the control process and the outputs are the same sequence. For that reason a programmed simulation can’t actually be conscious even though it can give the same outputs as a genuinely conscious process – because it’s operating on two levels. (I can see the holes in this even as I write, but that’s the intuition.)

    Why can’t you just produce a program where the control process is also the output? Well, if you’re trying to produce given outputs, that would mean the process was already specified. Instead of programming, you’d be left trying to engineer your substrate (your computer or whatever) to produce the right sequence and then somehow get it into exactly the right starting state, which itself would require some very complex preliminaries.

    Cutting to the chase, I think this would mean you can’t program conscious states, you have to “grow” them, but again I can’t put this into any rigorously argued form.

  8. Our hips, knees and ankles are perhaps the most important joints in our bodies because our legs would have no flexibility without them; but the majority of the time the joints are performing the same function as the rest of our leg bones which is supporting our weight. So we can say the joints have the duality of both support and flexibility.

    I don’t care whether you call all those fixed nerves that flow through our spinal cord and “blow up” in the base of our brain our central, sympathetic or autonomic nervous system; the brain is analogous to our joints since it gives our being flexibility but at the same time shares the same functionality as the spinal nervous system.

    The essence of the mind-body dilemma is that we get locked into “brainitis” and the necessary systems of rules which the brain naturally builds, and hence fail to grasp brain-body integration.

  9. Conway’s Game of Life is actually a very interesting analogue for Muhlestein to use simply because it raises the question of whether that other product of matter-organized-in-a-certain-way, *life*, can be so emulated. What happens when we substitute LIFE for CONSCIOUSNESS in these thought experiments? Can an instance of life be ‘replayed’ by stacking the proximal material organizational deck and forcing a certain natural process to unfold in an almost identical way?

    Of course it can. It happens all the time. We call it reproduction.

    Note how this forecloses on the problem of panpsychism suggested by the possibility of a conscious computational replay: the possibility of ‘patterns’ that can be ‘read into’ other material organizations in no way suggests ‘panvitalism,’ so it’s unclear why this should be the case with that fellow emergent product, consciousness. As a matter of empirical fact, the material organizational constraints that life requires are strict enough to assure that it remains localized. It seems safe to assume that the same holds for consciousness.

    In other words, Muhlestein’s disjunct (either contextual sensitivity is required or panpsychism is true) assumes that the material organizational constraints pertaining to consciousness are far, far more promiscuous than those pertaining to life. Given that all the instances of consciousness that we *know* of require life, this has to be considered the long odds bet.

    For me, these kinds of issues simply illustrate the conceptual perils of functionalism: the way the dualism of ‘program’ and ‘implementation’ has the tendency of turning consciousness into the slipperiest of fish. Who knows what kind of machinery can implement the ‘consciousness program’? Analogizing consciousness to life, on the other hand, continually reminds us of the *specificity* of the machinery required, and how, as Arnold is continually reminding us, the consciousness devil lies in the *empirical* details – the concepts be damned!

  10. You do not need consciousness when you are awake, do not forget that either.

    Do you need consciousness for anything at all?

  11. Vicente,

    That is if you are an automaton.

    If someone makes this statement: “All conscious beings are automatons, but not all automatons are conscious beings”, what is the difference between the two? (Aside from Arnold’s Retinoid system, which, if it can be precisely specified, can help decide what meat one should eat and what meat one should not, and does help draw a line in the sand for some vegetarians to get back into the meat business, despite its philosophical shortcomings similar to the line drawn for people over 18 to vote and people under 18 not to vote).

    If someone makes another statement: “Every conscious response is only a summation of many small reflexes”, can it be shown physiologically? If it can be shown, then all pragmatic discussions of consciousness should be thrown out of the window: reflexes don’t need consciousness, so you don’t need consciousness in any practical way. That goes back to what you said: “You do not need consciousness when you are awake, do not forget that either.”

  12. Vicente & Kar: There are at least three possibilities, no?

    1) Conscious awareness performs no functions.
    2) Conscious awareness performs functions.
    3) Conscious awareness performs functions that escape conscious awareness.

    Think of this problem in terms of information. Of the estimated 38 000 trillion operations the brain performs every second, only a minuscule fraction makes it to conscious awareness. Since conscious awareness receives little or no information regarding its myriad neurofunctional contexts, it *has* to be the wheel that confuses itself for the whole machine.

    Personally this is the thing I find the most difficult to keep in mind when rummaging around my own experience (as experience): the way it’s a neurofunctionally blind neurofunctional interstice. *Of course* it seems to be the wheel that does not turn. Evolution had neither the time nor the caloric resources to gerrymander a conscious system capable of tracking the vast informatic complexities of the greater brain. So it made do with the bottleneck a short history of random mutations provided.

  13. Sorry, I was just being a bit provocative…

    A few days ago there was a link in the dynamic roll to a paper regarding the possibility of consciousness being a spandrel (an evolutionary concept, a sort of exaptation)…

    Let’s forget about the philosophy of mind and philosophical zombies for a moment, and be a bit more prosaic. The point is that you need something for something… the moment you introduce a need, you have to set a goal, or fix a target, something,…

    What’s our goal? survival, just hedonic pleasure, reproduction… no goal at all.

    Is there a goal in biology… (in general)? if there is a goal in biology, does consciousness share it? according to Arnold, it completely does.

    So, what do you need consciousness for? or should we rephrase it, what’s consciousness for? for the sake of coherence.

    It is funny: the higher the level of consciousness you reach, the more detached from biology you are; in general the inclination to cling to things or follow instincts diminishes… one becomes sort of contemplative in a way.

    I anticipate the criticism, what do I mean by level of consciousness? well, overall understanding of the situation and deprogramming. I accept, beforehand, I am not building on solid rock.

    What’s consciousness for? what’s the goal?

  14. I think we evolved consciousness as an economy supervisor/manager of our physical actions in the outside world. Otherwise we would run purely on the actions from senses, using excessive energy without any ability to confer with our memory, like some creatures who are swept along through their lives in the environment they evolved from.
    I agree that some humans do not seem to need consciousness when they are awake but I think you do, Vicente. Some living things with a fixed cyclical environment do not need brains at all, let alone consciousness.

  15. The link for that paper is: http://philosophyofbrains.com/2012/06/28/is-consciousness-a-spandrel.aspx

    ‘Goal,’ remember, is an intentional concept. In strict functional terms the question is simply, What does human consciousness *do*? Environments provide various possibilities for exploitation, niches, that then select those mutations that allow organisms to gradually grow into them. Consciousness isn’t ‘for’ anything, though it was selected because it either enabled our ancestors’ brains to more effectively exploit niches, or is a byproduct of something that enabled them to do so.

    For me, the notion that the information found in consciousness is functionless makes no sense. Everything we experience has some neurofunctional role which may or may not subserve some broader envirofunctional (exaptational or adaptational) role. You need to see consciousness as a kind of privative crossroads, the point at which a myriad of far, far larger neurofunctional chains find themselves integrated. The point is that these neurofunctional chains in no way exist for consciousness, not even as ‘missing.’ Experience, which is to say, the information we do get, fills the entire screen – as it must.

    This is one of the things that makes consciousness so damnably difficult. In processing terms, it’s a mere fragment that has to – in some sense – mistake itself for everything.

    But if it’s almost entirely oblivious to its neurofunctional roles, a kind of informatic cartoon wrapped in a bubble, there’s still the question of its envirofunctional role. Since language requires the brain to make neural information available for aural or visual linear coding, and since so much psychological research suggests that the deliberative cognition characteristic of consciousness is primarily geared to social signalling, my pet *guess* is that human conscious awareness is a kind of inter-neural interface.

    At the risk of dating myself, I’d say we’re the modem.

  16. Scott et al.

    Yes, but all these considerations apply perfectly well without phenomenal consciousness, and without subjectivity – well, with no more subjectivity than the one the Roomba’s got.

    Richard,

    “I think we evolved consciousness as an economy supervisor/manager of our physical actions in the outside world. Otherwise we would run purely on the actions from senses, using excessive energy…”

    Well, energy consumption optimisation would be a trait favoured by evolution at all levels, without the need for consciousness. Surprisingly, the effect of advanced consciousness on the energy management performed by modern human societies is difficult to determine. Human psychological traits probably evolve to cope with scarce resources and don’t handle abundance very well. So it seems that low-consciousness states, where evolved instincts command – e.g. a crocodile, a shark, an ant or a cow – are more efficient. Change senses for sensors, and it works. The thing, Richard, is that life as a physical system needs no phenomenal consciousness at all. If ultimately theories like Dawkins’ selfish gene are ruling the game, that’s it.

    Scott,

    I see your point. Could it be that you would then enter into an infinite recursion? If you had the networks that monitor the networks responsible for consciousness, then how would you be aware of the output of the former? You would need some additional networks to monitor the networks that enable consciousness, and so on…

    It would be an internal perception process: just as you need the visual cortex to process the visual input, you would need some extra cortex to process the activity of the cortex producing consciousness (that you are perceiving).

    At some point you need the intrinsic observer, and that leads to your problem, that the latest stage can’t be self-aware, producing the bubble or entire screen filling effect you mention. The visual cortex produces phenomenal vision somehow, without ancillary cortex.

    This kind of paradox marks the moment to take a functional approach, as you did.

    “For me, the notion that the information found in consciousness is functionless makes no sense.”

    I intuitively and emotionally agree, but if that information is coded in synaptic topology and dynamic firing, why should we add anything…

    Probably you feel, as I do, that the concept of information itself is a conscious concept, beyond material substrates. But how to present this in scientific terms? There are theories in which the Universe can be described in terms of informational units and structures; maybe that’s the key, and Kar’s Universal CPU is the answer, and we are information structures flowing in time…

  17. Vicente: “at some point you need an intrinsic observer…”

    Actually, this needs to be amended to, “at some point you need an explanation of an ‘intrinsic observer.'” I agree entirely, and this is exactly what I’m giving you – an explanation – though you need to swamp around in the details a while before this becomes clear. The answer to “Why should we add anything?” is simply anosognosia: subjectivity, and all the peculiarities that pertain to it, is just what we get when we turn cognitive systems primarily designed to track external natural and social environments to the problem of phenomenal consciousness. It’s like asking a child accustomed to tracking objects *within* their visual field to track objects *larger* than their visual field: the latter ‘object’ has to seem motionless, perhaps even ‘transcendent.’ The child makes predictable mistakes… As do we.

    This, I’m saying, is subjectivity. The blindness of conscious subsystems to their neurofunctional contexts generates a number of ‘perspectival’ illusions that cannot be seen through: the sense that we somehow lie outside the causal order, that we are ‘aimed’ at the world instead of a product of it, that we are constituted by some kind of ‘special stuff,’ and so on.

    Is it simply a coincidence that 1) consciousness is (as a matter of empirical fact) informatically parochial; while 2) information privation can explain away so many peculiarities of consciousness? I *fear* not.

    So again, I pose the question I’ve been plaguing you and Arnold with for these past several weeks: If we are forced to rely on cognitive systems primarily designed to track external natural and social environments (and it seems that we are), should we not assume that information privation regarding phenomenal consciousness will lead these systems to make the same kinds of mistakes in instances of environmental information privation?

    I think the obvious answer is yes. The fact that this answer allows you to parsimoniously explain away the central structural features of subjectivity – unity, presence, aboutness – simply gives us (frankly, terrifying) solid abductive grounds for serious consideration. Moreover, it pares the Hard Problem down to a single, much more empirically tractable mystery.

    The brain doesn’t seem to need the ‘cartoon bubble’ we call subjectivity because it is a strange kind of cognitive illusion. On the BBT account, the most troubling thing about zombies is the way they insist that they’re people!

  18. Damn – a single typo destroyed the phrasing of my question: If we are forced to rely on cognitive systems primarily designed to track external natural and social environments (and it seems that we are), should we not assume that information privation regarding phenomenal consciousness will lead these systems to make the same kinds of mistakes *AS* in instances of environmental information privation?

  19. Energy—>Generator—>EMF—>Motor—>Energy

    Energy—>Qualia (generated via neural firing)—>Neural Firing (generated via qualia patterns)—>Consciousness

    Is the brain actually a qualia detector which generates its own qualia and neural patterns by detecting the more basic qualia generated by the body?

    Control and feedback.

  20. Scott: “So again, I pose the question I’ve been plaguing you and Arnold with for these past several weeks: If we are forced to rely on cognitive systems primarily designed to track external natural and social environments (and it seems that we are), should we not assume that information privation regarding phenomenal consciousness will lead these systems to make the same kinds of mistakes in instances of environmental information privation?”

    Yes, but notice that “information privation” does not prevent people’s cognitive systems from accepting the scientific explanation that the solid floor they stand on is really just a cloud of subatomic particles with fundamental forces. Why is there such resistance to the scientific claim that your phenomenal consciousness is really just the operation of particular kinds of neuronal mechanisms in your brain? If it were only a matter of information privation (brain blindness) there should be a similar rejection of the scientific explanation for the observed solidity of the floor on the basis of information privation (subatomic-particle blindness). It seems to me that the problem is primarily one of having sufficient grounds for *belief*. People BELIEVE that the floor can be a cloud of subatomic particles because science has a well-developed theoretical model that *predicts* the solidity of the floor! I suggest that when the general population is aware that science has a well-developed theoretical brain model that successfully predicts salient aspects of their phenomenal consciousness, brain-blindness/information-privation will not prevent the general *belief* that phenomenal consciousness is the operation of particular kinds of neuronal mechanisms in one’s brain.

  21. Scott,
    “subjectivity, and all the peculiarities that pertain to it, is just what we get when we turn cognitive systems primarily designed to track external natural and social environments to the problem of phenomenal consciousness.”

    I understand what you mean. But this statement indicates that we are not discussing the same thing even though we are using the same words.

    In my definition, or in my usage, subjectivity = phenomenal consciousness.

    So, if I use my definition, your statement becomes: “subjectivity, and all the peculiarities that pertain to it, is just what we get when we turn cognitive systems primarily designed to track external natural and social environments to the problem of subjectivity.”

    And if I try harder to understand it in my own way, the very first question I encounter is “Whose subjectivity are we talking about? Whose phenomenal consciousness are we talking about?”

    Let’s call the conscious being whose phenomenal consciousness is being discussed Mr. A; then the statement becomes “Mr. A’s subjectivity, and all the peculiarities that pertain to it, is just what we get when we turn cognitive systems primarily designed to track external natural and social environments to the problem of Mr. A’s phenomenal consciousness.” The second question is: If we do not turn Mr. A’s cognitive system to Mr. A’s phenomenal consciousness, will Mr. A still be conscious?

    This highlights the kind of mis-communication we have using the same words, but meaning different “things”.

    But because I do understand what you are saying, I understand that we have mis-communicated.

  22. Kar,

    I am close to your view. But could it be that subjectivity is closely related to the feeling of self, which emerges in the scenery built with phenomenal perception building blocks? So the equation needs another term:

    [2] Subjectivity = phenomenal consciousness + self-feeling (with I! locus too).

    Again, self-feeling constitutes part of the phenomenal universe. So the two terms are not independent; [2] is more a clarifying decomposition than an independent equation.

    will Mr. A still be conscious? Not in the way we are.

    Consciousness has to be understood as a whole, as the inner Universe that constitutes our existence, with all its components and possible expressions and states.

    Mr. A is conscious as long as he exists on his own devices.

  23. Vicente,
    Might what you called “phenomenal consciousness + self feeling” be just a finer division within phenomenal consciousness, similar to what Chalmers called the 1st order, 2nd order and 3rd order phenomenal judgments? To me, phenomenal consciousness already includes a self, though it is sometimes not explicitly stated, as in cases of first order phenomenal judgments.

  24. Kar: “Even though simulated rain cannot make you wet, a simulated brain can think.”

    I disagree. I would put it this way:

    Simulated rain has no wetness. A simulated brain has no thought.

    Thinking about something in the world requires subjectivity. A computer simulation of something in the world has no subjectivity.

  25. Arnold,
    I was not even thinking of subjectivity. Just strictly from a functionalist standpoint (or scientific standpoint, if you will), if you can simulate a brain, it works like a brain; otherwise it is not a good simulation.

    Just take the concept of a virtual machine in computer science: people run emulators on top of Apple operating systems, so that a Windows application can run on top of that emulation (the emulation is in fact a simulation of the Windows environment), “thinking” that it is running on a Windows platform. As far as the application is concerned, the emulation is the real thing. Same for the brain: if you can simulate a brain, you have a brain, period.

    Another example, if I use my Windows machine to simulate an oscilloscope using the right input signals on the right extension boards, I have an oscilloscope. The simulation itself is a real thing, a real scope.

    If you have come across something called an FPGA (field-programmable gate array), you probably realize that you don’t have to hold that piece of hardware in your hand to have an FPGA. You can simulate it in software, and the output is exactly the same as the original FPGA you are holding in your hand. The simulation itself is an FPGA. The simulation itself is the real thing.

    In fact, for all signal processing devices, the simulated version is the real thing in itself. There is nothing so controversial about it.

    In circuit design, it often comes to a point when you need to decide if you want to implement a certain feature with hardware or with software. To implement it in hardware, you build the hardware. To implement the same feature in software, you do it by simulating the hardware using the software, and the result is identical. In signal processing, the simulation itself is the real thing. Oh…by the way, the simulated software keyboard that comes with your iPad is a keyboard, is it not?
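    To make the point concrete, here is a toy sketch in Python (my own made-up example, nothing from an actual product): the same three-tap averaging filter written once as plain software and once as a simulated shift register of the kind you would describe for hardware. Fed the same samples, the two produce identical outputs – and producing those outputs is all a signal-processing device is asked to do.

```python
# A three-tap moving-average filter, written two ways.
# Illustrative sketch only: names and structure are invented for this example.

def average_filter(samples):
    """Plain software version: average each sample with its two predecessors."""
    out, prev1, prev2 = [], 0.0, 0.0
    for x in samples:
        out.append((x + prev1 + prev2) / 3.0)
        prev1, prev2 = x, prev1
    return out

class ShiftRegisterFilter:
    """'Hardware-style' version: a simulated two-stage shift register plus adder."""
    def __init__(self):
        self.reg = [0.0, 0.0]                       # the two delay elements

    def clock(self, x):
        y = (x + self.reg[0] + self.reg[1]) / 3.0   # adder / divider stage
        self.reg = [x, self.reg[0]]                 # shift the register
        return y

signal = [1.0, 2.0, 3.0, 4.0]
hw = ShiftRegisterFilter()
assert average_filter(signal) == [hw.clock(x) for x in signal]
```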

  26. Kar, when Kasparov played Deep Blue, he was thinking about the position of the chess pieces on the board in front of him. Deep Blue wasn’t thinking about anything; it was just crunching bits according to a chess playing program devised by thinking people. The fact that some logical aspects of Deep Blue’s chess algorithm were similar to Kasparov’s thinking does not mean that the computer was thinking.

  27. Kar: To the third person it looks and “feels” like a Windows OS whether it’s running on an Apple platform, an actual Windows platform or a zillion bedbugs trained to simulate an OS. But the point is your third person sees, feels and derives meaning. The Apple and Windows platforms have no biological subjectivity though each individual bedbug may feel something akin to an off-on state transition which is the nearest they get to meaning.

  28. Arnold,
    In reality, I am a proponent of the “Chinese room” argument (I have it in “Where are the zombies?” if you need the proof). So, I know what you are saying. But let me just dig in my heels, and side with the computationalists for the moment and insist: The computer was thinking, but just not like when you are thinking. Like an airplane does fly, but just not in the way a bird does. What will you say?

    It is not that I have no doubt about computationalism, but just that if you can simulate a brain, then the simulated version is a brain. Just like a simulated software keyboard is a keyboard. However, whether a brain can be adequately simulated or not is another matter. But if the Stanford people successfully simulated a bacterium, then we may have just moved a giant step towards simulating a brain.

  29. Flipping your lid: I would not take the example that far to make implication on subjectivity. It is only an example showing that there are situations when the simulations are also the real things.

  30. Kar: “The computer was thinking, but just not like when you are thinking. Like an airplane does fly, but just not in the way a bird does. What will you say?”

    An airplane flies, but a computer simulation of an airplane in flight does NOT fly. A person thinks, but a computer simulation of a person thinking does NOT think.

    Computer simulations do not realize the events that they simulate. However, simulations are valuable for providing proof of concept. For example, in *The Cognitive Brain*, Ch. 12, “Self-Directed Learning in a Complex Environment”, I present a number of computer simulations of a minimal retinoid and synaptic matrix system which demonstrate that its biological machinery can represent the global structure of a novel environment, target, parse, and classify (learn) objects within the global phenomenal scene, recall and image the learned objects, learn the common names of each object, and successfully search for and find arbitrarily named objects. The computer simulation itself does not perform any of these biological functions, but it does demonstrate that the theoretical brain model that was simulated CAN perform these functions. See here:

    http://www.people.umass.edu/trehub/thecognitivebrain/chapter12.pdf

  31. Arnold,
    You missed the point. It is like this:
    An airplane flies, a bird flies, but an airplane does not fly like a bird. A computer thinks, a brain thinks, but a computer does not think like a brain.

    I will be offline for the next few days.

  32. Arnold, Kar, Vicente: Check out, http://pan-pl.academia.edu/MarcinMi%C5%82kowski/Papers/1690649/Beyond_Formal_Structure_A_Mechanistic_Perspective_on_Computation_and_Implementation

    This Marcin Milkowski guy has a brilliant future, I think. I’m with Arnold on the simulation front: the question is one of how tightly consciousness is bound to a specific form of material organization. You don’t need to be a ‘neurochauvinist,’ as Schwitzgebel puts it, to think that it is an emergent product of something very, very specific.

  33. Arnold: “Yes, but notice that “information privation” does not prevent people’s cognitive systems from accepting the scientific explanation that the solid floor they stand on is really just a cloud of subatomic particles with fundamental forces. Why is there such resistance to the scientific claim that your phenomenal consciousness is really just the operation of particular kinds of neuronal mechanisms in your brain? If it were only a matter of information privation (brain blindness) there should be a similar rejection of the scientific explanation for the observed solidity of the floor on the basis of information privation (subatomic-particle blindness). It seems to me that the problem is primarily one of having sufficient grounds for *belief*. People BELIEVE that the floor can be a cloud of subatomic particles because science has a well-developed theoretical model that *predicts* the solidity of the floor! I suggest that when the general population is aware that science has a well-developed theoretical brain model that successfully predicts salient aspects of their phenomenal consciousness, brain-blindness/information-privation will not prevent the general *belief* that phenomenal consciousness is the operation of particular kinds of neuronal mechanisms in one’s brain.”

    There is a BIG difference, though, Arnold. To resort to my favourite example, the motionless earth: we initially think the earth is motionless pending the information made available by science, then we understand that it only seems to be motionless. I would suggest that our cognitive systems have evolved to correct for such *environmental* perspectival illusions. We are continually confronted with instances of environmental information privation, and so possess the capacity to readily update cognition via alternate sources of information. This ability is paramount to our survival.

    But in the case of conscious awareness, the situation is quite different. In this case we’re talking about a *neural* perspectival illusion. Environmental cognition is literally designed to be sensitive to information privation (SMTT is a perfect example of this), to continually maximize what little information we get. We intuitively appreciate that there is always ‘more than what meets the eye.’ Phenomenal cognition, on the contrary, is utterly insensitive to information privation. Our intuitions cut in precisely the opposite direction, which is why so many are so inclined to think their phenomenology has to be the MOST certain thing, the most informatically ‘sufficient,’ to use my own jargon.

    Why? Because the conscious subsystems of the brain simply do not receive any information regarding their neurofunctional context. The greater neuro-informatic context of conscious awareness does not exist for cognition the way the greater enviro-informatic context does.

    That all said, Arnold, you still haven’t actually answered my question! If cognition is prone to confusion in cases of environmental information privation (the way, last night for instance, I walked up to a stranger that I had confused for my brother-in-law because of the dark), should it not suffer versions of those same confusions in cases of neural information privation – which is to say, when we reflect on consciousness?

  34. Kar: “So, if I use my definition, your statement becomes: “subjectivity, and all the peculiarities that pertain to it, is just what we get when we turn cognitive systems primarily designed to track external natural and social enviroments to the problem of subjectivity.””

    But you do appreciate the need to be flexible with our definitions, to always take a stipulative mindset. After all, the proper definition is the one thing we’re all gunning for! It *could* be the case that consciousness is quite real, but that subjectivity is not.

    Thus the importance of my question: You seem to be espousing what Kahneman would call a WYSIATI (What-You-See-Is-All-There-Is) perspective. I’m asking why? Given that you are using cognitive systems that are regularly baffled and mistaken in instances of information privation, and given 1) that *drastic* information privation characterizes consciousness, and 2) that you are stranded with the same cognitive systems, why have any faith in any of your intuitions regarding ‘subjectivity’?

    To assume that subjectivity and consciousness are the same thing is to assume that your intuitions access all the information they need. This strikes me as wildly optimistic, particularly given all we’ve learned about the unreliability of those intuitions. Eric Schwitzgebel’s Perplexities of Consciousness gives a great overview of just how bad our intuitions regarding phenomenal consciousness are…

    So to tailor the question specifically for you, Kar: What warrants the assumption that your conscious subsystems access all the information required for your cognitive subsystems (primarily designed to exploit external environments) to make heads or tails of the information received?

    I think we can make a compelling case that you are systematically deceived. This certainly seems to be the way the evidence is trending, and it has the virtue of whittling down the Hard Problem to a single question.

  35. Scott: “That all said, Arnold, you still haven’t actually answered my question! If cognition is prone to confusion in cases of environmental information privation ….. should it not suffer versions of those same confusions in cases of neural information privation – which is to say, when we reflect on consciousness?”

    If by *cognitive confusion* you mean *episodes of error in belief*, then I certainly agree that we can have the same kind of mistaken belief about consciousness as we have about events in the environment. Where we might differ is in our expectations about a change in belief about consciousness. Just as the advance of science has changed common belief about the motion of heavenly bodies in relation to our earth, I think scientific evidence will change the common belief that conscious experience cannot be the biological activity of the brain. You seem to have a more pessimistic view about this. Am I wrong?

  36. An ECU is of little use by itself in its box, and the same goes for a brain without the to-and-fro interaction of information and action with the body and all that includes.
    Although I do not profess to know exactly how consciousness works, I consider it very narrow-minded of anyone who cannot see how useful awareness linked to memory could evolve into self-awareness. In fact, it must be virtually impossible to evolve self-awareness without eventually becoming conscious of it. With this, plus an exaptation to imagination and phenomenal consciousness, combined with our dexterity and resilience in coping with various environments, we can outsmart other creatures and as a result are at the top of the food chain.

  37. “A computer thinks, a brain thinks, but a computer does not think like a brain.”

    I think this is true with one exception: a computer doesn’t actually think. We think, and give a computational aspect to computers. Computers are lumps of matter whose extent – what pieces and lumps of electronics constitute the computer – is decided by us, not mother nature. The lumps of matter we, as humans, decide are computers sit doing nothing more than following the laws of physics.

  38. Scott[37], let’s continue our discussion. First of all, I am not sure if *drastic information privation* characterizes consciousness. In the case where you mistook a stranger for your brother-in-law, it just says our minds live in a representational world. There is an internal representation of the external world, and whenever there is a gap in our senses, the brain patches it over with the internal representation, in which there appear to be no gaps (you will not notice your blind spots unless you spend effort to demonstrate their existence). So, to use your terminology, if there is *information privation*, the brain resorts to the internal representation and fills in the gap. However, internally, say, the feeling of pain, what information is the brain being deprived of? What information privation? Are you referring to the neuron activities that correlate with the pain, or are you saying that I cannot be certain that I am in pain? If I am in pain, I am in pain, period. I know there are cases when people are confused about whether they are hurting or not. Being punctured by a needle is really not a big deal for *all* people, if they are not aware of the penetration *visually*. Mosquito bite, big deal. But there is a group of people who just cannot withstand looking at themselves being punctured by a needle. The “hurt” comes from the belief that it is indeed hurtful. Once someone overcomes this psychological barrier, the hurt feeling is not there any more. So one can argue that the subject is being confused by the feeling. But when the subject feels hurt, the hurting is real, whether it is a psychological effect or is a signal coming from the point of puncture. The brain is not being deprived of any information. If it hurts, it hurts. So, I am not sure if *information privation* characterizes consciousness.

    Second, subjectivity does not come from faith or intuition. It is just being used in the way that is consistent with its meaning.

    Give me an example of consciousness in which subjectivity is not involved. How will you use the word “consciousness” in a way that subjectivity is explicitly excluded? In my case, I cannot think of an example in which I will use the word “consciousness” without implicitly referring to the subjectivity aspect, except in the case of access-consciousness. But we are talking about phenomenal consciousness.

    On the other hand, when I say subjectivity, I always mean the subjectivity of a conscious being, the subjectivity that comes with this first person view of being a conscious being. So, subjectivity implies consciousness, and consciousness implies subjectivity; the two are the same the way I use them.

    Regarding “What warrants the assumption that your conscious subsystems access all the information required for your cognitive subsystems (primarily designed to exploit external environments) to make heads or tails of the information received?” I think you are probably mixing up the information in a third person world and the “information” in the first person world. In a third person world, your gut’s movement contains a lot of information, which the brain might receive, but you, the mind, are unaware of (is that what you mean by information privation?). But this poses no difficulty at all, because this is third person information and you are free to make up any imaginable mechanism for how this information helps the gut move. In fact, you can be talking about your friend’s gut and you will still be deprived of that information. But so what? Just like there is a lot of information inside the engine of the car carrying you that your mind is not aware of, what is the problem? In asking this question, you are probably assuming that the first person information, such as the feeling of a stomach ache, *is* part of the third person information such as the signals traveling inside the nervous system, but it isn’t. First person information is not part of the third person information. They only correlate. That’s why people keep talking about neural correlates. So, from my point of view, there is no information privation whatsoever. Consciousness is not what you imagine yourself being due to the lack of information; consciousness is what you are. There is no information privation.

  39. Kar: So you think our faculty of introspection has 1) total access to all the requisite information; and 2) is in no way dependent on environmentally oriented cognitive systems evolved long before human consciousness?

    I find ‘Just-So’ stories like these quite implausible. If you assent to (1) you need to explain all the findings that establish the, in some cases, dumbfounding unreliability of our introspective reports. If you assent to (2) then you need to explain why evolution, in the case of human consciousness, neglected to do what it seems to do everywhere else, namely, maximize *existing* resources to gerrymander reproductive advantages.

    Both of these strike me as tall, tall orders. If you concede (1) is false, on the one hand, then information privation becomes an issue you need to address. If you concede (2) is false, then you face the problem of cognitive misappropriation: the possibility that the consciousness we *think* we have is an artifact of cognitive systems just not equipped to deal with conscious experience in an intuitively satisfying way.

    You talk of ‘filling in’, and the way the brain represents the absence of information is part and parcel of something I call the ‘Accomplishment Fallacy,’ the notion that everything we attribute to consciousness requires some kind of device to be produced. Consider visual attention and the structure of focus (the region of maximal information), fringe (the region of minimal information), and margin (the point where visual information runs out). Given that the margin is a salient structural feature of visual experiences, yours, mine, everyone’s, what would the ‘neural correlates’ of this experiential structure be? In other words, would you say the brain has ‘no-visual-information-beyond-this-point’ neurons to represent the absence of representation?

    Otherwise, I’m not sure how you can evoke the first and third person distinction – which is to say, the very distinction we’re trying to explain – to disallow the possibility that information privation can be used to explain this distinction. It just begs the question, perhaps renders it impossible to answer in principle! The simplest thing to ask is: where does the information expressed in consciousness come from? Not consciousness, certainly.

    But to answer your question about providing an example of consciousness minus subjectivity: You. Me. Everyone we have ever known. That’s the possibility that, as terrifying and repugnant as it is, humanity is about to face up to, at least if the research continues its pessimistic trends. We have no reason to think science will rescue this one, most cherished conceit, given the way it has so reliably dispatched with so many others.

    You use the notorious pain argument to block any attempt to insert an appearance/reality distinction within experience, the suggestion being there is no way for us to be ‘wrong’ about what we experience. But the research shouts otherwise. Just consider pain asymbolia, which seems to prove that pain is actually an experiential *composite,* a sensation that can be broken down in intuitively bizarre ways. Do you experience pain as a composite? I’m guessing not. Doesn’t this mean that your experience of pain is missing some important information?

  40. Scott,
    I think I am really talking to a philosopher now, because I don’t quite follow what you are saying in the first pass. So I will try more passes. But I was also looking for the answer to my question and here it is:
    “But to answer your question about providing an example of consciousness minus subjectivity: You. Me. Everyone we have ever known.”

    Can you elaborate?

  41. Scott,
    Another thing I need clarification on: “then you need to explain why evolution, in the case of human consciousness, neglected to do what it seems to do everywhere else, namely, maximize *existing* resources to gerrymander reproductive advantages.”

    What is it that you are trying to say? What do “reproductive advantages” have to do with consciousness?

  42. Kar, I believe you can have consciousness and subjectivity decoupled. When you are deeply concentrated on a problem, or focused on an observation or a task, in real time you forget about yourself. Later on, when you recall the experience, you realise that you being there is just an illusion of memory retrieval.

    Now I’m referring to psychological subjectivity, not a geometrical point of view (Arnold’s I! reference in the retinoid space).

  43. Kar: Sorry about the wank. I’m a novelist by profession, philosopher by training. Since I don’t have any institutional cred to piss away to begin with, and since I think consciousness research is a paradigmatic example of trying to run with our shoes tied together, my primary interest lies in finding ways to drastically reconceptualize the problem. Philosophers squint at my questions as much as anyone!

    Are you familiar with Dennett’s arguments against ‘original intentionality’? I’m saying something (roughly) similar to what he would say, only without his faith in pragmatism. I’m saying subjectivity is something we *attribute* to complicated systems, not something that those systems ‘have’ in any ontological sense.

    Once you look at the problem in terms of what kinds of information are available to the conscious subsystems of the brain, you realize quite quickly that consciousness is privy to very, very little of the 38 petaflops we think the unconscious gut brain processes every second. The reason I called your story ‘just so’ is that it assumes quite a bit: that consciousness, though limited to a minuscule fraction of the brain’s information, gets just the right amount of information structured in the right way. As far as I’m concerned, this has been decisively, empirically refuted. As we might expect, given that consciousness is a recent evolutionary twist, much of our experience (most notoriously, our ‘feeling of willing’) seems to consist of deceptive kluges. We’re talking about a relatively young set of mutations, which means they will likely be messy, often leading to reproductive success (incorporation into our genome) in spite of themselves.

    Look how messy ‘culture’ is, all the apparent, from a raw survival/utility standpoint, waste that it generates. Consciousness, which is arguably the condition of this, is messy, ad hoc in the same general way. This is why, I’m suggesting, it’s so hard to comprehend: the very cognitive short cuts forced upon it by genetic serendipity and reproductive exigency have left us peering through a fundamentally skewed lens, one that works well enough when applied to environmental information, but which falls flat on its face when confronted with the interoceptive dregs. A brain, in other words, that is blind to itself *as a brain,* and so cooked up another way of looking at itself using those meagre ingredients available: the goulash we call ‘subjectivity.’

  44. Vicente,
    Hmmm…interesting example. So, in that scenario, is it subjectivity that is missing or consciousness that is missing? Real time.

  45. Vicente: “I believe you can have consciousness and subjectivity decoupled … I’m referring to psychological subjectivity..”

    Yes, I don’t think that consciousness and subjectivity (an active I!) are ever decoupled. What you call “psychological subjectivity”, I would say is thinking *about* one’s self in some way (I will, I want, I am, etc.), and this is often absent in our stream of conscious content.

  46. Scott,
    “I’m saying subjectivity is something we *attribute* to complicated systems, not something that those systems have in any ontological sense.”

    I think you are absolutely right about that. But I want to add that the *complicated systems* here refers to all systems other than yourself. Your own subjectivity, that is, your existence in the sense that you have this first-person view, is not something you attribute to yourself, but something you know. (Back to self-knowledge again! But if not, what would “your existence” mean if you didn’t have this first-person view? Can you imagine your existence without a first-person view? If you can, what makes you think that particular physical body is you?)

    I don’t care how many neural signals are going around in “my” head that I am not aware of: the fact that I have a point of view, my subjectivity, is a personal fact. It is in this sense that I use the word subjectivity and equate it with consciousness, my own consciousness. Another person’s subjectivity/consciousness is just my own model of how this other person should work, putting myself in that person’s shoes, so to speak.

    Your focus has been on other people. My focus has been on myself. That is the crucial difference.

    In this debate, there are typically two camps: those who believe consciousness has something to do with evolution, and those who decouple the two. I will argue that the kind of consciousness that one camp focuses on is not the same kind of consciousness the other focuses on. Let’s look at the claim that consciousness is a result of evolution. Basically, it says that in the long history of evolution, from the single cell, which is presumably unconscious, to the human, which is presumably conscious, something happened to the DNA of one intermediate species, and the new offspring of this intermediate species developed “consciousness” (whatever that means). There are two ways to look at this scenario.

    1) The embryos that later develop into the offspring are already mutated. Let’s compare one offspring with its parents. What kinds of differences will researchers find between this offspring and its parents to conclude that this offspring is “conscious” while its parents are not? It will be through its behavior and body structure. However, what kinds of behavioral differences and body structural differences can you find to draw such a conclusion? From today’s science, nothing. We cannot even say conclusively that a frog has no consciousness while a dog does. Unless you are talking about the parents being two trees while the offspring is a human. But we are talking about evolution. Evolution, presumably, proceeds slowly, from one set of similarities to another set of similarities. So, if evolution does create “consciousness”, this consciousness has to be a gray-scaled concept, such as the offspring being more conscious than the parents. It is in this sense that the word “consciousness” can be used. More conscious means being more capable of responding to environmental changes in a more advantageous way, or, to use the computationalist’s description, more capable of integrating “information”. But this is hardly the same concept as the phenomenal consciousness the other camp refers to, which is an “either/or” concept. In the other camp, either you have p-consciousness or you don’t. No middle ground. Either qualia exist or they don’t. No middle ground. When qualia exist, there is subjectivity. The very definition of qualia is subjectivity.

    2) The one embryo that later develops into one offspring is not mutated in the beginning, but the mutation occurs later in its life and it passes it on. So, the offspring starts off just like its parents, unconscious. But at some point in its life, it “wakes up”, and behaves differently. What kinds of differences will researchers find before and after the transformation? Before the transformation, it acts just like its parents. After the transformation, it acts differently. But what constitutes “being conscious” for the researchers? More intelligence? More capability for solving problems? A more connected brain? The fact that we are still debating whether cockroaches are conscious says it all. The researchers cannot say. The only thing the researchers can say is that, after the transformation, the offspring seems to be more able to solve problems. Again, this puts the concept of consciousness on a gray scale rather than in an on/off status. Again, it is a rather different kind of consciousness from the phenomenal consciousness that the other camp is talking about.

    Now, here is how this somewhat scientific, gray-scaled consciousness concept gets mixed up with the on-off phenomenal consciousness, a mix-up that has generated so much confusion. Gray-scaled consciousness advocates attempt to apply this concept to themselves, and come to this conclusion: when I am sleepwalking (unconscious), I am less able to handle things as well as I can when I am conscious. So, consciousness must serve a critical function, such as better information integration, and so on and so forth. By mistakenly connecting this gray-scaled concept, which is a purely third-person, somewhat scientific concept, similar to a physician’s determination of whether a patient is fully conscious or only half conscious or 1/10 conscious, to one’s own first-person experience of phenomenal consciousness, which is purely subjective, unprovable to everybody except you yourself, the gray-scaled consciousness advocates conclude that even this personal phenomenal consciousness serves a function, without noticing that an identity has been mixed up.

    But let me tell you, for your body to perform, this personal phenomenal consciousness is completely unnecessary. It is completely false that you can do things better in a normal state than in a sleepwalking state. An athlete performs best when he lets go of his body. Even in sports that require decision making, such as boxing, when your body takes over, you are just a bystander looking from the inside out, observing how your body beats up your opponent, or is beaten up while you watch helplessly. A pianist plays best when he lets go of his fingers. The thinking that you need phenomenal consciousness to do anything is just an illusion. The body takes care of everything. The physical brain takes care of everything, just as a physical brain should. But it is just curious that for some activities that “your” body performs, there is this quale thing that pops into your world. (Why else do you identify one particular body as your own? Because sometimes when it does something, qualia pop up for you.) Your body, a physical system that follows physical laws, is like a roller coaster on its own track: it will follow its own course based on its structure and physical laws. The notion that it needs something called “consciousness” to function better is just as illusory as the illusion of free will. I keep saying that if your gut can digest a piece of pizza without your knowing it, your brain should be able to make up its mind and act on it without your knowing it. If one insists that phenomenal consciousness is a result of evolution, and that it somehow serves a purpose, he is mixing up this gray-scaled “consciousness” (which is only a third-person model) with phenomenal consciousness, which is a completely personal, boolean thing.

  47. Kar: “But I want to add that the *complicated systems* here refers to all systems other than yourself. Your own subjectivity, that is, your existence in the sense that you have this first person view, is not something you attribute to yourself, but something you know.”

    I’m curious about this ‘all other systems but yourself’ statement, because it implies that you don’t agree with my statement at all. If I know that you know intentional consciousness is real for you, then I’m not merely attributing consciousness to you, acting *as if* you were an intentional agent; I’m *recognizing* that you are.

    You definitely need to check out Perplexities of Consciousness, Kar. It turns out that we know a lot less than we think we do! But then this has been the consistent trend of cognitive neuroscience: the gradual undermining of various intuitive verities. Things are just not what they seem with subjectivity.

    This comes back to haunt evolutionary debates on the function of consciousness. If we do have a blinkered perspective on our first-person perspective, then perhaps it should come as no surprise that what we intuit doesn’t seem to fit with what we know – that consciousness seems like the ‘wheel that does not turn’ when we take a biomechanistic perspective.

    I understand that it *feels* as if your introspective knowledge of conscious experience is apodictic, but it seems pretty clear that this is due to some kind of neglect: lacking information regarding the lack of information makes it impossible for your brain to realize it doesn’t possess all the information required. The question you need to answer is simple: Absent interoceptive information regarding the incompleteness of the interoceptive information available for introspection, how can you know that you *know much of anything at all* about the properties of consciousness?

    Are you giving me a ‘just so’ story here? Saying that introspection, despite all the structural and developmental constraints it faces, gets *exactly* the information it needs to cognize consciousness *as it is*? Even without the growing mountain of contrary empirical data, this strikes me as implausible.

    Or are you giving me a ‘just enough’ story? Saying that introspection gets *enough* of the information it needs to cognize what consciousness is, *more or less*?

    I have a *not enough* story. I think we are all but blind, that introspection is nothing but a keyhole glimpse that seems as wide as the sky simply because it lacks any information regarding the lock and door. I’m saying that we attribute subjectivity to ourselves as well as to others, not because we actually have subjectivity, but because it’s the best we can manage given the fragmentary information we’ve got.

  48. Scott,
    How can I know anything at all? I think you are absolutely right again, except for one point. Then, of course, “it implies that you don’t agree with my statement at all”. 😉

    I think I know that the word “Scott” starts with an “S”. I saw it on the screen. In fact, it is my intuition that the word starts with an “S” when I see it. But it cannot be right, because there is so much that I am ignorant of. When I look at the screen, when I see an “S” followed by some other stuff, the image is a result of megaflops of calculations that I have no knowledge of, billions of electrons interacting with each other inside the liquid crystal molecules, twisting and turning, scattering light all over the place, some coherently, some incoherently, all coming in from the backlight, a result of the marvelous solid state physics that makes the LEDs glow. Occasionally, a virtual pair of strange quarks pops up here and there inside the LEDs, producing some short-lived but definitely observable effect if you care to look. All these, all these, are outside my immediate perception and knowledge base while I am looking. When all this is going on, completely unknown to me, I intuit that the thing I see on the screen is an “S”, all based on my limited quale of “seeing an S” and nothing else. How can I be right? I am completely information deprived. I am in a state of information privation. My knowledge of what I saw, even though it is first-hand, has to be somehow compromised. After all, it is only one quale vs trillions of unknowns. So what I think I know must be something I make up under this state of information privation. I am pretty sure that what I saw must not be an “S”, even though I have the quale of seeing an “S”…unless…unless someone confirms it to me…but then I will just be having some audio quale that I intuit to be a human voice, while unknown to me are these vibrating molecules in the air, after millions of collisions, reaching my auditory canal…ok…2 qualia vs trillions of unknowns…Naa…

  49. Kar: In a sense you’re making my point *for me* with the example you chose. Your reductio turns on a strawman. I’m not making the radical skeptic’s claim at all. In other words, I’m not making a claim against the possibility of knowledge in principle. All I’m saying is that there is a relationship between the information available for cognition and the reliability of that cognition. Surely you don’t deny this.

    You see the S and know what it is with ease because your brain possesses ancient and powerful systems for cognizing visual environmental information. The question I’m asking you is quite simple: does your brain possess ancient and powerful systems for cognizing *its own neural information*? I think the answer is an obvious no.

    How would you answer the following questions?

    1) Are you denying there is a relationship between the information available for cognition, the capacities of the systems involved (whether, for example, they are primarily adapted to tracking our external environments), and the reliability of that cognition?

    2) Are you denying that our environmental cognitive faculties possess a far longer evolutionary pedigree (in all likelihood by an order of hundreds of millions of years) than our introspective cognitive faculties?

    3) Does this not suggest that ‘introspective cognition’ should be far less reliable than, say, ‘visual cognition’?

    4) Might this not explain why we find consciousness so difficult to explain?

  50. Scott,
    One question summarizes all questions you raised:
    When you are lying in a hospital bed groaning painfully because your stomach hurts like hell, and a doctor rolls in a machine and tells you that you are scientifically not in pain according to the machine, what do you think?

    But let me directly respond to your questions so that we are not chasing each other’s tails:
    1) Yes.
    2) No. Humans’ ability to recognize pain should be as old as, if not older than, their ability to run away from predators.
    3) No, see 2)
    4) The implication is not clear.

    Another point, even if 2) were a yes: Michael Phelps is only 27. I have been swimming far longer than he has existed. But guess what? His “newly” developed swimming ability is far more reliable than mine. Newer is not necessarily less reliable, not even in evolution.

  51. Kar: I’m always puzzled by how quickly people reach for pain when being pressed on the reliability of introspection. I never said introspection provided *no* information, only that it provides low resolution information, and no information regarding neurofunctional contexts.

    So regarding (2), are you saying that dogs have the capacity to reflect on their experience? Because that is what we’re discussing, the capacity to introspect, to draw conscious experience into our field of attentional awareness, not the brute capacity to experience.

    Regarding the sudden evolution of novel, fully formed and highly tuned capabilities, there may be exceptions (I know of none), but surely they only serve to reinforce the rule. Given this, my account is the more likely one. Adaptation requires mutations, which statistically occur at certain frequencies over long periods of time. The more time, the more adaptations. The more adaptations, the greater the capacity. You’re not denying this, are you?

  52. Scott,
    Pain is a very good example because “pain” drove you to quickly write a response…see? (Let’s not worry about pain being composite; cyan is composite, but we perceive it as one color…not a problem.)

    On “only that it provides low resolution information”: well, qualia are qualia because they are not quantifiable, or else they would be quantia. If you use high resolution/low resolution to describe qualia, you may be shouting up a different tree.

    A dog certainly knows it is in pain; whether it is a result of introspection or not I cannot say. It knows, at the very least, to lick its wound when it hurts.

    “Because that is what we’re discussing, the capacity to introspect, to draw conscious experience into our field of attentional awareness, not the brute capacity to experience.”
    I have long suspected that we are not talking about the same thing. This is the proof. I am talking about the question of why qualia exist, why subjectivity exists for me. I am not talking about my ability to introspect. Qualia are not a result of introspection. They just pop up.

  53. Kar: “I have long suspected that we are not talking about the same thing. This is the proof. I am talking about the question of why qualia exist, why subjectivity exists for me. I am not talking about my ability to introspect. Qualia is not a result of introspection.”

    Read Perplexities. What qualia ‘are’ and what we ‘think’ them to be are two different things, aren’t they? Lots of people think lots of different things. Until we have some sense of what we are explaining, how will we know if and when we have an accurate explanation? Do you agree with this much? If no one can agree about a thing, it actually tells us something important about that thing. All I’m doing is explaining that lack of consensus in informatic terms: we can’t agree because we can’t access the information required to compel agreement. How else should we tackle this problem?

    I understand you want your introspective intuitions to be self-evident, but they simply are not. The same way pain doesn’t bear information regarding its composite structure, all of your ‘qualia’ are likewise missing information. The simple question is whether we should expect that missing information to play a role in our ability to cognize what qualia are. I see this question as pretty straightforward, and important, but you seem to want to unask it. Why?

    Does a dog have introspective knowledge of its pain? This strikes me as pretty implausible, given that even our nearest relatives exhibit only meagre reflective capabilities. What evidence do you have? And if it’s not ‘introspective knowledge’ you’re talking about, then how could you be talking about human experience anymore?

  54. Scott: “What qualia ‘are’ and what we ‘think’ them to be are two different things, aren’t they?”

    When we claim that qualia *are* such and such, we describe qualia in 3rd-person objective (brain?) terms. When we claim that *our* qualia *are* such and such, we describe qualia in 1st-person subjective (what it feels like) terms. The 1p and 3p claims necessarily occupy separate domains of description. In order to explain 1p descriptions in terms of 3p descriptions, we need a scientifically productive bridging principle. This is why I have proposed the following principle:

    *For any instance of conscious content there is a corresponding analog in the biophysical state of the brain*

    The scientific problem is to find brain mechanisms that can generate proper analogs of salient conscious content. This is what I claim the retinoid system accomplishes.

  56. Arnold: If only it were so simple. If we all could agree on our first-person descriptions, then we would have something reliable to correlate with our third-person descriptions. The question here is one of the reliability of introspective ‘cognition.’ As Eric details in Perplexities of Consciousness, introspective psychology foundered because so many of our first-person descriptions seemed to vary so wildly from person to person. Finding neural correlates is all well and fine, so long as the terms being correlated are well defined. Short of this, something like the retinoid system can only be the correlate of X, whatever it is we are introspecting. I’m arguing that if you diagnose this problem in terms of information and the kinds of mistakes we are inclined to make when we lack the requisite information, we have a way of seeing consciousness that substantially closes the kind of insuperable gulf that Kar and Vicente think obtains between the third and first persons.

    My last post elaborates this problem in greater detail: http://rsbakker.wordpress.com/2012/08/09/error-consciousness-part-one-the-smell-of-experience/

  57. Scott,

    Yes, we can be mistaken about details of our experience. Our introspections can be influenced in all kinds of ways: by failure of memory, by the influence of faulty belief, by current fashion, etc. But these are trivial details when we confront what has to be explained if we are to understand how the brain gives us our conscious experience. I am constantly amazed at how blithely we accept our normal experience of being in a surrounding world. This is a truly astonishing fact and it is the essence of consciousness! We have no sensory apparatus to detect the space we live in, yet HERE WE ARE with feelings in us and stuff and events all around us! This is the universal phenomenon that a science of consciousness has to explain. Keep your eye on this: the fundamental perplexity of consciousness.

  58. The greatest theoretical merit of your theory, I think anyway, Arnold, is the economical way it explains the ‘surround.’ I’m not convinced this is the ‘essence’ of consciousness, however, and like Peter I worry about the ‘Cartesian theatricality’ of your approach. (In my own case, I find I suffer the bad habit of thinking the things the Blind Brain Theory most elegantly explains are the most important things to be explained!) And I still don’t understand how the retinoid system accounts for the indexicality of ‘hereness…’

    I actually think you need *my theory* to do that! So it goes… 😉

  59. Scott,

    1. The “theater of consciousness” is just a metaphor for retinoid space. I haven’t seen a cogent argument against the human brain having this kind of egocentric representation of the world around us.

    2. We all suffer a fondness for our own theories. An explanation for this can be found in *The Cognitive Brain*.

    3. I think I know what the “indexicality of ‘hereness’” means, but I am not a philosopher so I’m not sure. As I understand it, I would claim that the egocentric structure and dynamics of retinoid space, coupled to the pre-conscious mechanisms of the cognitive brain, account for the indexicality of hereness. I would feel more secure on this point if you would elaborate on what you mean by “the indexicality of hereness”.

  60. Scott,
    So, let’s talk about perplexities…
    Let’s talk about the example in which people whose faces are approaching a wall feel that their faces are sensing the approaching wall, when in fact it is the auditory system that senses the proximity of the wall through echoes.

    Now, are they wrong about their feelings? Are they so ignorant of themselves that they cannot be trusted? No, not at all. If they feel that it was the face that was sensing the approaching wall, the feeling is still real. It is just the interpretation of the feeling that was wrong. Just like phantom limbs: even though there is no limb, if shaving the face gives you the feeling that your lost limb is being touched, your feeling that the lost limb is being touched is still real. Have you ever had an itch on one arm, but when you scratched the arm, the itch did not go away, because that was not the itchy spot? Then you just by chance scratched a spot not on the arm and the itch went away. How wrong! But wait. You may be wrong about the location of the itch, but the feeling that the itch comes from somewhere is still real. You cannot be wrong about your feelings, even though you can be wrong about the physical interpretations of those first-person feelings in terms of third-person facts.

  61. Kar: I thought we covered this ground. I never denied that people have conscious experiences, Kar; I just said that their introspective interpretations are anything but ‘neutral observations.’ Consider ‘arousal theory,’ and the way we rationalize what seems to be some kind of inchoate physiological affect with our post hoc interpretations. The point I’m making is that introspection pretty clearly *transforms* experience as much as ‘intuits’ it. Feeling is simply not ‘given’ the way it seems to be.

    Like I say, this is pretty uncontroversial stuff.

    So the problem becomes: If reflecting on conscious experience transforms conscious experience, how do we sort our interpreting from what is interpreted? It’s not so much a matter of ‘being wrong about our feelings’ as it is of understanding what those feelings are independent of our ‘introspective spin.’
