Picture: Macaque. Can monkeys have blindsight? Sean Allen-Hermanson defends the idea in a recent JCS paper. Blindsight is one of those remarkable phenomena that ought to be a key to understanding conscious perception; but somehow we can never quite manage to agree how the key should be turned.  Blindsight, to put it very briefly, is where subjects with certain kinds of brain lesions deny seeing something, but can reliably point to it when prompted to try. It’s as though the speaking, self-conscious part of the brain is blind, but some other part, well capable of directing the hand when given a chance, can see as well as ever.

There are a number of ways we might account for blindsight. One of the simplest is to suppose that the visual system is degraded but not destroyed in these cases; the signals from the eye are still getting through, but in some way at reduced power. This reduced power level puts them below the limit required for entry into conscious awareness, but they are still sufficient to bias the subject towards the correct response when they are prompted to guess or have a random try. Another popular theory suggests that the effect arises because there are two separate visual channels, only one of which is knocked out in blindsight. There is a good neurological story which can be told in support of this theory, which weighs strongly in its favour;  against it, there have been reports of analogous phenomena in the case of other senses, where it is harder to sustain the idea of physically separate channels. Allen-Hermanson cites claims for touch, smell and hearing (I’ve wondered in the past whether the celebrated deaf percussionist Evelyn Glennie might be an example of “deafhearing”); and even suggestions that the case of alexithymia, in which things not consciously perceived nevertheless cause anxiety or fear, might be similar.  It’s possible, of course, that blindsight itself comes in more than one form with more than one kind of cause, and that there is something in both these theories – which unfortunately would make matters all the more difficult to elucidate.

For those of us whose main interest is in consciousness, blindsight holds out the tantalising possibility of an experimental route into the mystery of qualia, of what it is for there to be a way something looks. It’s tempting to suppose that what is missing in blindsight patients is indeed phenomenal experience. Like the much-discussed zombies, they receive the data from their senses and are able to act on it, but have no actual experience. So if we can work out how blindsight works, we’ve naturalised qualia and the hard problem is cracked…

Well, no, of course it isn’t really that easy. The point about qualia, strictly interpreted, is that they don’t cause actions;  qualia-free zombies behave just the same as normal people, and that includes speech behaviour. So the absence of qualia could have no effect on what you say;  since whatever blindsight patients are missing does affect what they say, it can’t be qualia.  Moreover we have no conclusive evidence that blindsight patients have no visual experience; it could be that they have the experience but are simply unable to report it. That might seem a strange state to be in, but patients with brain damage are known to assert or confabulate all sorts of things which are at odds with the evidence of their senses; in fact I believe there are subjects who claim with every sign of sincerity to see perfectly when in fact they are demonstrably blind, which is a nice reversal of the blindsight case.

Still, blindsight is a tantalising glimpse of something important about conscious experience, and has all sorts of implications. To pick out one at random, it casts an interesting light on split-brain patients.  In blindsight cases, we can have an apparent disconnect between the knowledge the patient expresses with the voice, and the knowledge expressed with the hand; that’s pretty much what we get in many experiments on split-brain patients (since normally only one hemisphere has use of the vocal apparatus and the other can only express itself by hand movements).  Any claims that split-brain patients are therefore shown to be two different people in a single skull are undercut unless we’re willing to take up the unlikely position that blindsight patients are also split people.

One interesting extension of blindsight research is the apparent discovery by Cowey and Stoerig of the same phenomenon in monkeys. There is an obvious difficulty here, since human blindsight experiments typically rely on the subject to report in words what they can see, something monkeys can’t do. Cowey and Stoerig devised two experiments: in the first, the monkeys were trained to touch a screen where a stimulus appeared; all were able to do this without problems. In the second experiment, the stimulus did not always appear on cue; when it did not, the monkeys were required to press a separate button. Normal monkeys could do this without difficulty, but monkeys with lesions thought to be analogous to those causing blindsight now went wrong when the stimulus appeared in their blind field (the region of the visual field affected by the lesion), hitting the ‘no stimulus’ button. Taking the two experiments together, it was concluded that blindsight had effectively been demonstrated: the damaged monkeys who could earlier touch the right part of the screen even when the stimulus was in their blind field later ‘reported’ the same stimulus as absent.

(Readers may wonder about the ethical propriety of damaging the brains of living primates for these experiments; I haven’t read the original papers, but I suppose we must assume that at any rate the experiments had medical as well as merely philosophical value.)

Of course, these experiments differ significantly from those carried out on human subjects, and as Allen-Hermanson reports,  reasonable doubts were subsequently raised in a 2006 paper by Mole and Kelly, who pointed out that relying on two separate experiments, which made differing demands, made the results inconclusive.  In particular, the second task was more complex than the first, and it could plausibly be argued that the result of having to deal with this additional complexity was that the monkeys simply failed to notice in the second experiment the stimulus they had picked up successfully in the first.

Allen-Hermanson’s aim is to rescue Cowey and Stoerig’s conclusions, while acknowledging the validity of the criticisms. He proposes a new experiment: first the monkeys are trained to press a green button if there is a stimulus (no need to point to where it is any more), and a red one if there is none. Then we introduce two different stimuli: Xs and Os. Both the green and red buttons are now divided in two, one side labelled for X, the other for O. If there is a stimulus, the monkeys must now press either green X or green O depending on which appeared: if there is no stimulus, they can press either red button. Allen-Hermanson believes the blindsighted monkeys will consistently press red X correctly if the stimulus is X, even though they are effectively asserting that there is no stimulus.
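Allen-Hermanson’s predicted pattern of responses can be set out in a small sketch. This is only a toy model of the hypothesised behaviour; the function name and parameters are illustrative inventions, not anything from the paper:

```python
# Toy model of the four-button task Allen-Hermanson proposes.
# Hypothetical response patterns only; no actual experimental data.

def predicted_press(stimulus, blindsighted, in_blind_field):
    """Predicted button press as a pair: ('green'|'red', 'X'|'O').

    stimulus: 'X', 'O' or None; blindsighted: has the relevant lesion;
    in_blind_field: stimulus falls in the damaged part of the visual field.
    """
    if stimulus is None:
        return ("red", "X")  # no stimulus: either red button will do
    if blindsighted and in_blind_field:
        # The key prediction: the monkey presses red ("no stimulus")
        # yet still discriminates the shape it denies having seen.
        return ("red", stimulus)
    return ("green", stimulus)  # normal sight: report stimulus and its shape

print(predicted_press("X", blindsighted=True, in_blind_field=True))
# ('red', 'X') - in effect: 'there was no stimulus, and it was an X'
```

If the prediction held, the red-X and red-O presses would track the actual stimulus far above chance, which is the pattern Allen-Hermanson takes to be the monkey analogue of “I saw nothing, but I guess it was an X”.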

Maybe. I can’t help feeling that all the monkeys will be puzzled by a task which effectively asks them to state whether a stimulus is present and then, if not present, say whether it was an X or an O. The experiment has not been carried out; but Allen-Hermanson goes on to suggest that Mole and Kelly’s alternative hypothesis is actually implausible on other grounds. On their interpretation, for example, the blindsighted monkeys simply fail to notice a stimulus in their blind field: yet it has been demonstrated that they cannot recognise objects as salient (in monkey terms) as ripe fruit when these are presented to the blind field – so it seems unlikely that we’re dealing with something as simple as inattention.

What would it mean if monkeys did have blindsight? It would seem to show, at least, that monkeys are not automata; that they do have something which corresponds to at least one important variety of human consciousness. Allen-Hermanson proposes working further along the mammalian line, and he seems to expect that mammals and even some other vertebrates would yield similar results (he draws the line at toads).

At any rate, we’re left feeling that human consciousness is not as unique as it might have seemed.  I can’t help also feeling more strongly than before that the really unique feature of human awareness is the way it is shot through with language; we may not have the only form of consciousness, but we certainly seem to have the talkiest.


  1. Vicente says:

    A post in the core business! Thank you, Peter.

    One question: for what reason does human consciousness seem to you unique?

    Maybe human intelligence, reasoning, awareness, understanding etc. are unique on planet Earth (I think they are), but why human consciousness?

  2. Peter says:

    I don’t know, Vicente – humans seem to have a more complex inner life than any other animal, don’t they? Dogs don’t write novels (alright, there might be other reasons for that). Some have argued that animals don’t feel pain the way we do, though I’m not convinced by that. To me, it’s an open question whether our consciousness is different in kind from that of another primate, or merely different in the degree of development of certain faculties. What do you think?

  3. Vicente says:

    I believe all higher-order animals share the faculty of having conscious experience. Probably lower orders too. The understanding and interpretation of the experience varies over a wide range, depending on the capacity to construct a model of the world, and on the role the individual plays in that model (self-awareness). Humans are by far number one in the ranking.

    If I may use this analogy: a chimp, a human and a parrot can all have a look at an encyclopedia or watch a movie. The conscious experience of the images on each page or letter is very much alike, but the awareness and understanding of the experience differ a lot.

    In a similar fashion, even our human conscious experience is quite different in dreaming and alert states, for example, but consciousness itself is the same in nature.

    So, as you said, our inner life is much more complex than a dog’s, but having an inner life is a boolean parameter in the first place: you have it or you don’t. Only then can you ask whether there are different kinds or degrees, or how much you get out of it.

  4. Vicente says:

    A second point is that I see a close relation between blindsight and the experiments in which some visual information is presented to a subject for such a short time that he is not aware of it, yet the brain can still make use of the information later on.

    There must be some autonomous systems that process visual information very fast, beneath the awareness of the subject. Probably those systems are involved in triggering the reflex to close the eye when something approaches it fast enough.

  5. Lloyd Rice says:

    Peter: You raise an interesting issue in your paragraph 4 involving the definition of qualia. I agree that Chalmers used the word as you have it: a creature with no qualia, but identical in all other respects. However, that definition always seemed to me to be well into the realm of the impossible and useless (except to philosophers, I suppose). I believe the word has also been used in a more plausible way (or maybe there’s another word for it), meaning any animate entity that does not experience consciousness. If such a creature had language, you could certainly tell it was a zombie by the way it described itself.

    Vicente: You seem to be saying that consciousness is boolean, either there or not, and yet could have different degrees. I would argue that the image of a typeset page as seen by the chimp, human, and parrot is actually quite different. Because the human would sense the word meanings, there would be differences in the saccadic scans and corresponding differences in the perception of the page.

  6. Peter says:

    Lloyd, yes, the strict definition of qualia does indeed lead to apparently impossible difficulties (among other things, none of the talk about qualia could actually have been caused by them), which is why some say we should stop talking about them. On the other hand, if you water it down, there may still be something interesting to say, but it isn’t really the ‘hard problem’ any more. There is a large and ever-growing range of zombies to suit all tastes in the literature, of course, including ones along the lines you mention.

  7. Vicente says:

    Lloyd, what I am saying has different degrees is the interpretation or understanding of the conscious experience by the “observer”. Actually, the understanding itself is part of the conscious experience, in quite a bizarre recursive way.

    You analyse and question how such a thing as qualia happens to exist in a material world. A monkey or a cow, I am sure, just has a “naive realist” approach to its inner life.

    Peter: “…none of the talk about qualia could actually have been caused by them…” – could it be that saying “directly caused” solves the difficulty? If a subject observes an object and talks about it, could you say that the talk was strictly caused by the object? No: to really say that, the observer would have to have been unavoidably forced to talk. At most, you could say that the object indirectly caused the observer’s talk.

    So in the case of qualia there is no direct cause-effect relation between their existence and talking about them. Actually, 99.99999% of human beings have never talked about qualia as such.

    Nevertheless, personally, I think qualia are one of the main drivers of our behaviour, directly or indirectly.

  8. Vicente says:

    It might be worth going through the previous blog post “A Spectrum of Consciousness”, in which an interesting relation with GWT is mentioned.

  9. Kar Lee says:

    I believe when Peter said “none of the talk about qualia could actually have been caused by them”, he was referring to what David Chalmers called “the third order phenomenal judgement”.

    If we assume that the physical world is causally closed, then anything that happens in the mental domain has to be explanatorily irrelevant, or epiphenomenal. Thus, qualia could not have caused anything that has physical effects. The act, by a human, of talking about qualia can only be caused by the brain’s physical processes, which are not qualia. So even a zombie can talk about qualia (same brain processes), even though the zombie does not have them. I think Peter has a posting on a paper by Elitzur on “bafflement”, and at the time I was very impressed with Elitzur’s argument that the physical world is not causally closed. Or did I misunderstand your comment?

  10. Vicente says:

    Kar Lee, I think you understood it well. What I am saying is that we can make a difference between a direct cause, and an indirect cause. Particularly, in the physical world cause-effect mechanisms should be well established.

    I don’t agree that if you talk about something, that “something” is the cause of your talk.

    In the case of D. Chalmers 1st, 2nd and 3rd Judgements, I see it a bit as stating the obvious.

    What does it really mean that the Universe is “causally” closed?

    What is definitely true is how baffling this is.

    In the case of blindsight, the subject is making use of information outside his conscious field (no qualia attached), yet it is still used in terms of qualia. Baffling.

  11. Vicente says:

    Actually, I think Elitzur has a dualistic interactionist view of the whole thing (or not?), to which I am coming quite close myself. The problem is that you just take a step forward to find a similar problem.

  12. Vicente says:

    Kar Lee: This is taken from Elitzur’s paper:

    “…A philosopher writes a book about qualia, discussing their enigmatic nature in great detail, and then states that he would write exactly the same book had he lacked qualia!
    But why on Earth should zombies express bafflement about qualia if
    they don’t have any? After all, Chalmers professes physicalism, by which there is a cause for anything zombies say. If the cause of the talk about qualia is not qualia themselves, what is it? As he himself frankly asks…”

    I agree with Elitzur overall, but I don’t think the content of a talk is the cause of a talk.

  13. Peter says:

    There’s certainly something in that point about content and cause, Vicente – if our talk is caused by the things we talk about, how is it we can talk about things that don’t exist? All the same, it seems as if causality must be in there somewhere: when I shout ‘look at that fox!’ I like to think the fox stood in some causal relation to the utterance.

    And it’s a little uncomfortable at least, as Kar says, to think of zombies talking about qualia in just the same way as we do. They say they’re talking about things they’ve experienced, but ex hypothesi, they haven’t experienced those things. That can’t help but undercut normal people’s claims about qualia somewhat.

  14. Kar Lee says:

    In principle, zombies can have blindsight too, can’t they? That is getting really complicated now…

  15. Peter says:

    Yes, I suppose so, in that they could point to a target correctly while plausibly denying that they could see it. There would be a problem if you defined blindsight so that it required the ability to have actual phenomenal visual experience: some might want to do that, but I don’t think it’s normal.

  16. Vicente says:

    Peter: “…when I shout ‘look at that fox!’ I like to think the fox stood in some causal relation to the utterance…”

    The question is: did you have the choice not to shout, or did the presence of the fox completely force you to shout?

    What I think is that the fox, plus many other concurrent factors, just triggered a process that led to your shout in that case.

    If I gave you a big prize just for describing the room you are in, would you do it? In that case, what is the cause of your talk?

    I think content and cause are quite decoupled in the case of talk, although they can be closely related in certain cases.

  17. Doru says:

    I like the idea that “when you see a flower you actually become that flower seeing itself in your mind like in a mirror”. Sounds kind of esoteric, but for me it is sufficiently explanatory in the qualia debate.
    So monkeys can have a blind spot because of a brain malfunction, which is like seeing without qualia (zombie vision).

  18. Paul Almond says:

    This reminds me of a (half-serious) thought experiment that occurred to me some time back. Here it is:

    We copy Professor John Searle’s brain structure into a computer. We then run the simulation, in a virtual reality or maybe installed in a robot. We ensure that the “mind uploaded” John Searle is not aware of its status: we allow it to think it is the original. We ask it if things like consciousness and qualia should be expected from computer programs. Of course, it will answer that it has no reason to think they should be. We ask it for a justification of this and it comes out with his usual arguments (Chinese room, etc.). We then reveal its status: we show it that it is a computer simulation of John Searle’s brain.

    Now, at this stage, what does the simulation do? Some possibilities:

    1. Claim that nothing has changed, that its previous arguments were valid and therefore that it probably lacks consciousness itself. (This would be hard to imagine.)

    2. Claim that it has now modified its position, in view of the fact that it is conscious and in a computer, and admit that its biological original was wrong.

    3. Claim that Searle’s arguments were sound, but its own realization that it is now conscious in a computer wins out against all that – although it would not expect the original Searle to believe it, and still agrees he has no rational basis for such belief.

    4. Emphatically deny any evidence for its status as a computer simulation – claiming that this is all a trick and it is really still the original, biological Searle.

    5. Claim that the original, biological Searle’s arguments were valid, that it is conscious, and that consciousness is clearly a physical property of its own particular physical nature, and that biological beings, including the original, biological Searle from which it was derived, are probably zombies. It should be noted that this is what we might expect of it if it still thinks that Searle’s arguments are valid, in view of its own claimed experience of consciousness. We would then have the unusual situation of a computer program claiming that Searle’s original arguments were valid, but were produced by a zombie pretending to be a conscious philosopher.

    One thing to note is that Professor Searle might have his own views on what the simulation would do – it would be based on his brain after all. I can imagine that he could easily say that it might claim that it is conscious, but would simply say that the simulation would be wrong in claiming this.

  19. Peter says:

    Interesting. I think he might say something like this…

    I don’t accept the breezy premise of your thought-experiment. Copy my brain structure into a computer? I don’t know what you mean by that, but in any sense which involves a transfer of consciousness, it can’t be done.

    But let’s be sporting about it – suppose for the sake of argument it wasn’t impossible. Then the conscious entity you created here would say; yes, I’m fully conscious and have qualia, and it doesn’t trouble me in the least that my ‘brain’ is a computer. I’ve always accepted that brains are computers; lots of things are computers if you want to look at them that way. The point is that my consciousness does not arise from the computerhood of my brain. Somehow here you’ve included the property – which I’ve always taken to be biological, but maybe that’s not necessarily the case after all – which actually does give rise to consciousness. I still don’t know what it is, and you still don’t know what it is (even though you’ve somehow inadvertently managed to include it in your set-up here), so we’re left just where we were in that respect. One thing we do know, however, is that the running of some algorithm on a computer is not sufficient for consciousness…

  20. Vicente says:

    Paul, as I see it, the point is that what you have presented is not even a half-serious thought experiment; it is just a science fiction short story. You have prejudged the result, whereas a thought experiment has to create a credible paradox built on loose ends of an established theory. I can write this one:

    We copy Professor John Searle’s brain structure into a computer. We then run the simulation, in a virtual reality or maybe installed in a robot. We ensure that the “mind uploaded” John Searle is not aware of its status: We allow it to think it is the original. We ask it if things like consciousness and qualia should be expected from computer programs. The whole thing turns out to be a big flop. We all get back home.

    It is at least as plausible as yours.

    Think about it: you said “then run the simulation” instead of “then start the copy” – why?

    Generally speaking, I don’t like at all the way in which thought experiments (“gedanken experiments”) are used in this field. For physics they are a real, legitimate and useful research tool; here they are just literature, in which we can all imagine and write anything.

  21. Vicente says:

    Paul, please note that what I have said is my personal opinion, and a general remark which I would like to think is objective. I also believe that thinking up these kinds of stories is worthwhile, quite entertaining and amusing, and can lead to interesting ideas. I am just trying to say that in order to make systematic, real progress I don’t find them appropriate. Your comment is definitely interesting.

  22. Kar Lee says:

    We like thought experiments! However, some thought experiments based on computer simulation have an underlying non-trivial assumption that is easy to overlook: that everything physical can be digitized and simulated. This may not be true. In particular, I don’t know if you can simulate the wavefunction collapse of a pair of correlated singlet photons separated by the distance of a galaxy. How do you simulate a quantum collapse over such a distance? If the existential interpretation of quantum mechanics is correct, it is even worse, because then there is no collapse, but a rapid, smooth decoherence process that “propagates” across a galaxy faster than the speed of light. I don’t know if that can be done. So one can only conclude that the assumption that everything physical can be simulated is not as general as one would like it to be. Some critical process in the brain may just be another example of something that cannot be simulated. It is therefore very possible that all the thought experiments relating to uploading of consciousness onto a computer are just not valid. I am extremely skeptical about this uploading concept.

  23. Vicente says:

    Kar Lee, actually, what we should not overlook in computer simulation is that what the computer is actually running is a numerical solution of a “mathematical model” of a certain physical system, not the physical system itself. There you have: fidelity (modeling) and accuracy (numerical calculus) to look at.

    So when considering simulating the brain, what we are doing is finding solutions of “a mathematical model of the brain, constrained to certain boundary conditions”.

    So in logical terms, physical systems are never fully simulated unless the mathematical model you use is complete and has analytical solutions. The brain does not belong at all to this class of systems. In fact, I believe the only “real” system that can be fully simulated is the two-body problem, and there we cannot even talk of simulation: it is really solving the problem.

    So, simulating certain areas and parts of the brain for neurophysiological studies is a great tool. Simulating the brain in terms of its global behaviour? Please let me laugh until I lose consciousness. First give me a solid physical-mathematical model of the brain and then we talk.

    I am aware that systems can be modeled at different depths and complexity levels, and you could just use logical networks, neuron firing statistics etc., and it is interesting, but that is not simulating the brain.

    So when Paul refers to Searle’s brain structure installed in the computer, first thing is to have a 100% accurate mathematical model of that brain, and that is impossible.

    I could accept running an approximate model, but then the whole logic of the tale fails.

  24. Kar Lee says:

    Vicente, good point. Simulations always have accuracy issues. A simulation can never exactly parallel the physical thing it is simulating. For the brain, one mis-firing of one single neuron may lead the model to deviate significantly from the real thing after n time steps, even if we ignore any quantum complication.
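    The point about small errors compounding can be made concrete with a toy example: in a chaotic map, a “copy” that starts off by one part in a million soon loses all resemblance to the original. The logistic map here is just an analogy for sensitive dependence, not a model of any neural process.

```python
# Two trajectories of the chaotic logistic map: an "original" and an
# imperfect "copy" differing by one part in a million at the start.
# An analogy for compounding simulation error, not a brain model.

def logistic(x, r=3.9):
    return r * x * (1 - x)

s, c = 0.500000, 0.500001   # original state and imperfect copy
max_gap = abs(s - c)

for step in range(60):
    s, c = logistic(s), logistic(c)
    max_gap = max(max_gap, abs(s - c))

# The gap, initially one part in a million, grows by orders of magnitude.
print(max_gap)
```

    If brains are chaotic in this sense (a further assumption), even a near-perfect copy would quickly stop tracking the original state for state.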

    Another point I like to re-visit is that Peter seems to infer that monkeys are not automata because monkeys can have blindsight. But if zombies can have blindsight just like non-zombies, and zombies are automata, the inference does not seem to be too watertight. But the thinking that zombies can have blindsight is troubling because blindsight is by definition sight without qualia. If zombies can have blindsight, and they, by inference, should have the non-blindsight which is the “normal” sight, the sight with qualia, how does this line of reasoning reconcile with the very fact that zombies have no qualia whatsoever? Qualia are as elusive as ever.

  25. Vicente says:

    Kar Lee, yes, what you have pointed out is equivalent to Elitzur’s complaint against Chalmers.

    I don’t know if this is a trick of the English language, but what you have raised could be the reason why we have “artificial VISION” systems but not “artificial SIGHT” systems.

    A zombie system is equivalent to an artificial VISION system: it has physiological cameras and image-processing algorithms, but no sight.

    Actually, in agreement with Elitzur, I believe zombies are impossible: even if qualia are not the cause of the talk, they are its content in many cases, and if you don’t have them you can’t comment on them.

    So, can we make some redefinitions and say:

    Sight = Vision + Qualia => Blindsight = Vision

  26. Vicente says:

    My very personal opinion about this is that us humans, are zombies with an additional transponder that enables a communication channel with “something else” related and responsible for qualia.

    This transponder is “synchronized” in a GWT fashion and transmits information of the current scene at a certain rate X Hz.

    Additionally the brain has autonomous modules that can operate and make decisions using available information.

    If the system fails to create the “data package” or to transmit it correctly, the conscious experience is not created properly at the “other side”, yet the autonomous modules can still make use of the information: there you have blindsight and similar effects.

    If I may make an analogy, it is like a planetary space mission. The rover is the zombie; it has communication modules and an orbiter to transmit to the ground mission control centre. Our conscious experience is somehow played by the people in the mission centre. The rover can receive commands from the control centre, and also has modules (with artificial vision) that allow it to move and react to the environment.

    In the blindsight case, I think we have a rover that cannot transmit images and info to Earth, yet it can still use its artificial vision systems.

    Of course this is a conceptual approach, I haven’t got a clue what this transponder or the other side could be.

  27. Paul Almond says:

    Vicente, I really don’t agree with the assertion that any copy would need to be a 100% accurate duplicate for a thought experiment like that: anything capable of acting vaguely like Searle, enough to say the same kinds of things and give the same kinds of responses to his arguments, would be enough to cause a possibly awkward situation. We simulate lots of things without 100% accuracy all the time. Regarding the possibility that it would simply fail to duplicate anything of the kind: of course that has to be accepted, though I find it rather pointless to point out that very obvious possibility, and it is not one that I really accept as plausible. You say that it is as plausible as the experiment working. Let’s examine that:

    If Searle’s brain is a system that can be formally described, then its behavior can at least be approximated by another system that can be formally described – unless we drag in issues like those raised by people like Roger Penrose. Not even Searle claims otherwise. That would indicate that there is some process by which you could take Searle’s brain and make a copy. It does not mean any individual attempt would succeed. As an extreme example, we might imagine just writing a program randomly which happens, out of luck, to approximate Searle’s behavior – and if Searle’s behavior is formally describable and Penrose is wrong, it is hard to see why that is not in principle possible, despite its impracticality. Now, for your “It doesn’t work” scenario to really be just as plausible, we would need brains to have something which is out of the reach of all this. They would need something that puts them beyond the reach of formal systems, or at least beyond the reach of formal systems that can be implemented on computers (as Penrose, implausibly in my view, claims). That should hardly be as plausible as the idea that a simulation of a system, of sufficient quality, acts pretty much like the system. In this respect, I really think your view of that outcome as just as plausible was flawed, resting as it does on assumptions about brains being fundamentally different from other physical systems, or on making something important out of the truly meaningless point that you can’t simulate systems with complete accuracy due to measuring limitations.

    Incidentally, if the fact that you can’t measure something with complete accuracy IS as profound as you seem to think, it even causes problems for Searle’s biological brain with regard to its own state changes. Each state of his brain depends on the previous state, which depends on the previous state, and so on (unless we adopt some very strange view of how brains work). If a neuron is supposed to be propagating information from another neuron, and cannot accurately measure what it is supposed to be propagating, and if the idea of 100% similarity between two copies is important, then this should cause issues: nobody else can measure Searle’s brain with 100% accuracy, and neither can the individual pieces of Searle’s brain as it undergoes state changes. The copy may not accurately duplicate what Searle’s brain is supposed to be doing, due to measurement limits; but then again, if there is even anything that Searle’s brain is “supposed to be doing” in any meaningful way, neither can Searle’s brain itself in its state changes. If we want to make an issue of 100% accuracy, it should be just as rational to demand 100% measurement accuracy in the way a single system propagates its own state through time. For this reason, I regard the “It wouldn’t be a 100% copy” objection that some people raise as really, really shallow. Now, you could simply say that a thought experiment like this is “too far out” to be of any worth, and you may have a point there.

    To put this another way:

    Suppose Searle’s brain is in state S1 at time T1, and in state S2 at time T2. We imperfectly copy the brain to make state C1 at time T1, which changes to state C2 at time T2. Someone says that state C2 is not a valid continuation of Searle’s brain, or somehow has less status than some putative 100% accurate copy, because it is not “100% accurate”, being based on C1, which is not 100% accurate either. However, state S2 was derived from state S1 by the same kind of measurement process – the same “lack of 100% accuracy” is effectively there, just going on inside Searle’s brain during state propagation.
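    Paul’s point here can be caricatured in a toy sketch (the dynamics, the error magnitudes, and the rounding used to make the “copy” are all invented purely for illustration): both the copy and the original propagate state through an imprecise read of the previous state, so the copy’s divergence stays comparable to the original copying error.

```python
import random

def step(state, read_error=1e-6):
    """One state update: each component 'reads' the previous state
    imperfectly before computing the next state (a toy stand-in for
    neurons propagating signals with limited precision)."""
    noisy = [x + random.uniform(-read_error, read_error) for x in state]
    return [0.9 * x + 0.1 for x in noisy]  # arbitrary toy dynamics

random.seed(0)
s1 = [0.5, 0.2, 0.8]            # "Searle's brain" at T1
c1 = [round(x, 3) for x in s1]  # an imperfect copy made at T1

s2 = step(s1)   # S1 -> S2: the propagation is itself imprecise
c2 = step(c1)   # C1 -> C2: same process, same kind of imprecision

drift = max(abs(a - b) for a, b in zip(s2, c2))
print(drift)    # divergence stays of the same order as the copying error
```

    Under these toy contracting dynamics the imperfect copy does not blow up; whether real brains behave anything like this is, of course, exactly what is in dispute.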

    Nevertheless, your opinion on what I said and its merit is duly noted, so thank you.

    I should maybe stop mentioning Searle’s brain. If he comes on here anytime it may seem strange to him.

  28. 28. Paul Almond says:

    As an extra comment here, unrelated to my previous one, and maybe a bit more on-topic: I would agree with the view that the issue of blindsight isn’t going to throw any light on the “hard problem of consciousness”, because any explanation we ever get for it will be an explanation in terms of physical processes in the brain, or something more or less equivalent (like an explanation in terms of some information processing model of the brain which more or less means it is an explanation in terms of the same physical processes) – and then making the leap from that to the “hard problem” – if there is such a thing – would be as difficult as it is with anything else we already know about the brain. It can possibly tell us something about brain functioning, and can possibly support or bring into doubt various information processing views of how minds work.

  29. 29. Vicente says:

    Paul, just a few reactions:

    “If Searle’s brain is a system that can be formally described, then its behavior can at least be approximated by another system that can be formally described”

    Formally as in a physical-mathematical model of the whole system?

    “then its behavior can at least be approximated by another system that can be formally described”

    approximated with what error?

    “Incidentally, if the fact that you can’t measure something with complete accuracy IS as profound as you seem to think, it even causes problems for Searle’s biological brain with regard to its own state changes.”


    “the same “lack of 100% accuracy” is effectively there, but just going on inside Searle’s brain during state propagation.”

    Not at all, systems don’t need to gauge themselves at all to operate. In addition, you are assuming that the brain is a state machine, which is something to be proven.

    “Nevertheless, your opinion on what I said and its merit is duly noted, so thank you”

    I think what you said is pretty interesting, and a source of inspiration for all of us to go on fighting with “impossible to understand” reality. Look at my own personal crazy/stupid ideas in #26. The most sensible thing in the whole blog is what Kar Lee said in #24: “Qualia are as elusive as ever”. I know I am sometimes too blunt in expressing my points; maybe that is also because blunt and direct answers to my “opinions” don’t bother me – I even prefer them. After all I am just a stick in the mud, a poor devil in the field, but… beware of the R. Penrose points you referred to.

  30. 30. Lloyd Rice says:

    Vicente, re #26: Apologies for my brashness, but it’s a mystery to me how you can “have a personal opinion that … [such and such is the case]” when you also “haven’t got a clue” how it could be. Sounds more like wishful thinking. Isn’t there some kind of a tendency to want things to be rooted in at least some sense of rational basis, if not observable evidence?

  31. 31. Vicente says:

    Lloyd, you are right and your brashness is welcome. This reminds me of one of the Edge questions; there is also a book, I think: “What We Believe but Cannot Prove”. I admit there is a lot of wishful thinking, but there is also some rational background.

    The baseline “to me” is:

    – By introspection I find it much more acceptable that qualia and phenomenological experience are not part of the “known” physical Universe; therefore they are something else.

    – Experience makes me think that both sides (Phys. Universe / Qualia) are related, therefore there must be an interface.

    – My “model” (most probably not genuinely mine, of course), although it does not provide any further understanding of qualia, gives me a small step forward in understanding effects like blindsight and many others.

    – For the idea I presented there is as much observable evidence as for most others.

    – The existence of the zombie itself and how the link with the other side is created, etc, etc, are questions for which I also have some personal ideas.

    – If you give me one single compelling reason why what I said is not possible, I will immediately discard it (my view, not your reason).

    I just wanted to share a personal view which I understand that, for many people, is just bullshit. It happens that those people usually don’t come up with any satisfactory explanation either.

  32. 32. Lloyd Rice says:

    Vicente: OK. Fair enough. For me, because I found it extremely difficult to believe in an external “something” for which I could see no evidence other than the fact that consciousness is hard to understand, I was very open to the explanations for consciousness I found in Dennett, Baars, others, and most recently, Metzinger. I am now about halfway through “Ego Tunnel” and have ordered a copy of “Being No One”. For my tastes, Metzinger directly addresses your concern that consciousness seems so clearly to be “not part of the physical universe”. So I guess it is hard for me to accept your position. Thanks for welcoming my brashness.

  33. 33. Lloyd Rice says:

    Vicente: I will admit, however, that the final step between having a clear sense of what the world/self map is all about and understanding how that could be the source of our sense of phenomenality is a big one. I would say that it kind of grew on me over a period of a few weeks. And that would never have happened at all except for the difficulty I have in accepting an external explanation.

  34. 34. Vicente says:

    Lloyd, I wish a dualistic approach of the kind I mentioned would be an external explanation. It is no explanation at all.

    For me it allows me to understand some aspects of the mind-body problem a bit better than a pure physicalist formula does.

    At the same time, conscious experience and subconscious processing can coexist and find their natural room in the model.

    Many psychiatric disorders, in my ABSOLUTELY NON-PROFESSIONAL opinion, can be understood as malfunctions of the “rover/brain” that transmits dodgy data to the “observer?”, for example the voices heard by schizophrenic patients. Conditions like blindsight are also easier to model.

    Of course the main questions are what is the nature of phenomenal experience, and how does the interaction between both “worlds” work.

    Mind you, what is “external”? Maybe there are elements and structures in the physical universe that we don’t know of yet which could account for the mind. I don’t think that is the case, but who knows. So far, for me it is impossible to find a place for qualia in the physical Universe. Remember the discussion we had on the space coordinates of qualia.

    Another point is whether the transponder is bidirectional: it sends data and receives commands. That it sends data is obvious; for the second case, I will appeal to visual experiments like the ones in which you can force your brain to choose between two instances of an ambiguous image. Typical examples are the (1 Cup / 2 Faces) image and the two-colour arrow patterns pointing in opposite directions.

    I am very skeptical myself, I would just say that among all the options available (not many really), this is the one with which I feel more comfortable.

  35. 35. Lloyd Rice says:

    Vicente: I certainly agree that you have to go with what is comfortable. However, I would argue that the psychiatric issues you have raised can also be nicely explained by normal and abnormal configurations of the physical brain. For me, such explanations quickly outweigh any tendency to turn to non-physical explanations. And it means that I do not need an explanation for your “interaction”.

    And then we are left with the qualia question. That is the biggie. My argument is basically that the sensations we call qualia emerge when the world/body model (“map”) is fully functioning. In order to see why, it is necessary to dig into the operation of the map in great detail. I hope to be able to write much more about this in the near future.

  36. 36. Kar Lee says:

    Paul, finally I see your point. You are trying to get a logical system to self-defeat. You are trying to get a computer running some brain process simulation (whether it is Searle’s or not is not important) to conclude that the underlying mechanism enabling itself to be conscious cannot make itself conscious. So, once it is revealed to it that it is such an unconscious system according to its own logic, it either has to change its logic/position, or there will be self-contradiction. The mind uploading thing is really just a side track, which snowballed into the main focus.

    To me, whether one buys into the “Chinese room argument” or not is not only a matter of logic, but of emotional attachment to the term as well. Different people have very different emotional reactions to the concept of consciousness. The fact that this term is so difficult to define, yet somehow we all seem to know what we are talking about while disagreeing on so many aspects of it, tells me that we may not be talking about the same thing. I even misunderstood your thought experiment and focused only on the mind uploading aspect of it.

    The mixing up of access and phenomenal consciousness is one example, and I suspect there are more types of confusion. To me, I can only understand the term consciousness in terms of my own phenomenology. It is as hard for me to imagine myself being a system of logic gates switching high and low as being a piece of rock. So I have to dismiss a system of logic gates being conscious, just as I would a piece of rock. If I cannot see consciousness in one piece of rock, it will be hard for me to see consciousness in two pieces of rock rolling down a slope together. Adding more rocks does not help. Increasing the complexity of the slope so that the rocks can roll down in a more complex pattern does not help either. So, in the end, no material system that is just the movement of its constituent pieces in some complex pattern can get me to see consciousness in it – meaning it cannot get me to see the possibility of it giving rise to my own phenomenology. And of course, this is the hard problem. (I can see material components building up to smart, problem-solving automata, though.)

    I also tend to agree with Vicente that there is something extra, because I have already exhausted everything there is in the material world. I fully sympathize with his space mission robot example. My own favorite is still that the characters in the Sims family don’t realize that they are just different roles played by the same CPU. And the CPU I will take as the Universal Mind, which is the qualia bearer. Somebody calls this “just kicking the can further down the street”. I happen to agree.

  37. 37. Vicente says:

    Lloyd then:

    1. If qualia emerge from the brain, they must share at least some of the brain substance. What is the substance of qualia?

    2. Qualia must emerge in some place of the space. Where?

    3. If you are adding two numbers, or thinking about a quantum mechanics theorem, is that a sensation?

    4. What do the previous points have to do with the “map”?

    5. Qualia are directly connected to the senses or imagination. You can be completely drunk, dizzy or in an altered mental state – any state in which your world/body model is completely messed up, very weak, amnesic or disoriented – and qualia are there just the same. It is the understanding of the experience that is affected. I don’t see the fully functioning condition you need for qualia to emerge.

    6. Regarding the psychiatric disorders, we agree; I meant that abnormal configurations of the physical brain cause the “data sent” to be incorrect or made up, so the “observer”, unaware of the malfunction, is confused and stressed, having a completely wrong perception of the situation. In that sense we are all a bit fooled. Dualistic models are more suitable to explain conditions like blindsight.

    The big step you mentioned is much more than that; it is a huge gap filled with void.

  38. 38. Lloyd Rice says:

    Vicente: What is the substance of a picture? Not the paint or the canvas, etc, but the image itself?

    Let me go back to the self-driving car we discussed in the “Resurgent AI” page of this blog. Think of a map of the terrain. It is a very complete map including, for example, bumps or rocks in the road that might be of concern to the path of the vehicle. Now suppose the “map” (in a larger sense) includes software to explore the details of those rocks, their exact positions, sizes, compositions, etc, anything that might be of concern. This “exploratory” software essentially builds a detailed map of the rock in some format that can be used by the car’s goal seeking software. Now, further imagine that the car’s “rock evaluation module” generates a beeping sound that indicates the level of danger posed by the rock. Actually, it need not be an actual sound. It might well be just some computer code that stores a certain code value somewhere. But that value gets picked up by a status evaluator and may completely change the planned course. As things got more and more complex and the vehicle had to interpret the “meaning” of the rock (ie., how it would affect the vehicle’s future course of action), it is my claim that this “code” to deal with the rock would represent the first stage in what could become a conscious sense of the rock. The car would “see” the rock. What it sees is all of the information that has been compiled about the rock.
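    Lloyd’s car example can be sketched as a toy pipeline (the class, functions, and the danger formula below are all invented for illustration – nothing here is a real self-driving stack): the “rock evaluation module” compresses everything compiled about the rock into a single code value, and a status evaluator lets that stored value change the planned course.

```python
from dataclasses import dataclass

@dataclass
class Rock:
    position: tuple   # (x, y) on the terrain map
    size: float       # metres
    composition: str

def danger_code(rock, path_x):
    """Hypothetical 'rock evaluation module': compress what the
    exploratory software compiled about the rock into one code value."""
    x, y = rock.position
    clearance = abs(x - path_x)            # distance from the planned path
    return rock.size / (clearance + 0.1)   # bigger & closer => higher danger

def plan_course(rocks, path_x, threshold=1.0):
    """Status evaluator: a high enough danger code changes the course."""
    for rock in rocks:
        if danger_code(rock, path_x) > threshold:
            return "replan"   # the stored code value redirects behaviour
    return "proceed"

rocks = [Rock((0.2, 5.0), 0.8, "granite")]
print(plan_course(rocks, path_x=0.0))  # -> replan
```

    On Lloyd’s claim, what the car “sees” is this compiled information about the rock; the code value standing between perception and action is his candidate first stage of a conscious sense of it.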

    The qualia do not emerge from somewhere in space in the sense that you are saying. They emerge from the organization of the information that is represented.

  39. 39. Lloyd Rice says:

    Qualia are information. Consciousness is what we “see” of the rock. All we know about it. What it sounds like if we hit it. What it feels like if we step on it. And what it will do to our schedule if it gets in the way.

  40. 40. Vicente says:

    Well Lloyd, here we are back at square one. I shouldn’t have mentioned the issue of substance, which is already controversial enough in physics and metaphysics.

    I am surprised that you didn’t use the Cartesian theatre / homunculus infinite-regression problems related to dualistic models, which for me pose one of the strongest drawbacks.

    Regarding your AI-equipped car… what you call sensing I think is detecting, just as vision is not sight, etc.

    Finally, if qualia are part of the physical universe they have to be somewhere in space. Even if an ant colony’s behaviour is an emergent property that goes beyond the sum of the behaviours of each individual ant, the ant colony is somewhere, and the behaviour can be described in physical terms and has physical effects. You could say that the behaviour itself is an abstract concept, but then it does not belong to the physical world. You don’t accept that qualia/mind could have physical effects either.

    Qualia are information? It depends on how you define information; qualia, among other things, can in certain cases contain information. If you imagine the look of an atom with a nucleus of 10^90000000000000000000 protons (theoretically ruled out), what information is there?

    What is the information in sadness? Emotions are usually “the result” of certain past and present information, or of future expectation.

  41. 41. Lloyd Rice says:

    Mark this space. Conscious Entities blog page “Monkey See?”, comment # 41. Here it is:

    Consciousness is information sufficiently organized that it can look at itself.

  42. 42. Lloyd Rice says:

    Dennett, Baars, Hofstadter, Metzinger, yes, even Searle and Chalmers. They’ve all been saying it. I just put it in those words.

  43. 43. Lloyd Rice says:

    I think Damasio was one of the first to clear up some of the crucial issues and to get the ball rolling out of the dualistic stagnation. It will take a lot more discussion to sort out all of the details. But we’re off and running.

  44. 44. Lloyd Rice says:

    Vicente: Everything you say in #40. Good points. Substance. Infinite regression. Vision is not sight. Behavior is an abstract concept. What is the information in sadness?

    All well put. Good things to ponder.

  45. 45. Lloyd Rice says:

    Paul and Kar Lee: Interesting trains of thought in #22, #27, #36, … A pile of rocks rolling down the hill certainly doesn’t do it. But a system of logic gates is another matter. The difference is their potential to organize information. Searle said a lot of confusing things about simulation. For me, what it comes down to is this ability to organize information. Is it there or not? And is it organized in such a way that it can affect physical matter? (And of course, to be able to look at itself.)

    I agree that Kurzweil’s “uploading” business is nonsense (re #22), but not because of any limits of simulation. It’s simply that a particular assemblage of information properly organized in a particular body/self includes a very large number of dependencies upon that particular body/self. Those dependencies would not be applicable to another body/self.

    Blindsight is an important diagnostic tool (re #14). But Chalmers’ philosophical zombies are nonsense. If the information is properly organized, you get consciousness. If not, you don’t. You can’t have one without the other.

  46. 46. Vicente says:

    Lloyd, Lloyd, this reminds me of Californication by the Red Hot Chili Peppers, and the lyrics go… teenage bride with a baby inside getting high on information…

    are you getting high on information?

    I have an idea that I mentioned in another blog, don’t know which one. I believe there is a conservation law for information, that is the one that keeps you safe when you pass away…

  47. 47. Lloyd Rice says:

    Vicente: Thanks for the kind words. I do agree that I will be long dead before this is all worked out.

  48. 48. Vicente says:

    “Consciousness is information sufficiently organized that it can look at itself”

    It is a beautiful statement, and I believe it holds a profound meaning. I am sure that this view is part of the TRUTH.

    If you think that consciousness is also responsible for organizing the information, it seems you have closed an important loop.

  49. 49. Lloyd Rice says:

    Vicente: No. Actually I think that consciousness is created by the structure of the information. The consc. did not do the organizing. Something else did that, either a software designer or evolution.

    Of course, there is much more to be said. For example, in this material universe, information cannot stand alone. It requires a material base, a substrate. Compare that to the original (Cartesian) view, in which consc. was seen to be non-material stuff. The conclusion was that it must be some other kind of stuff, off in its own universe. But today, only a handful of radicals would claim that information is not of this material universe.

    Also, much more needs to be said about the role of the body in this material support, because that is a two-way relationship. Besides supporting the information structure, the body also contributes significantly to that structure.

  50. 50. Lloyd Rice says:

    Re #49: To say that first sentence another way, I would say that consciousness arises when the information structure is active.

  51. 51. Vicente says:

    Lloyd: So for the blindsight case, the visual information’s organisation index is below the threshold value that triggers sight, but the information is still organised enough to be used by the brain if required. Is it like that?

  52. 52. Lloyd Rice says:

    Vicente: I suspect it’s a bit more complicated than that. The “organization index” (if that concept makes sense) would apply to the entire organism. It would signal the presence of the basic mechanisms that underlie the entire mental structure. I suspect that blindsight, on the other hand, is due to specific failures in local structures. These might not actually be pathological. For instance, the path taken by the optic nerves goes by way of the thalamus before reaching the visual cortex, and it seems that, much earlier in evolution, the thalamus handled much of the visual function. This organization leads to many possibilities for perceptual activity which make use of only a portion of the overall visual processing structure. This means there can be a mismatch between the incoming visual information and the portion of the visual information used to activate the workspace.
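    The mismatch Lloyd describes – only part of the visual processing structure activating the workspace – can be put in a deliberately crude two-channel sketch (purely illustrative; the function and channel names are invented, and real visual pathways are vastly more complicated):

```python
def visual_system(stimulus, cortical_ok=True):
    """Toy two-channel model: only the cortical channel reaches the
    'workspace' (verbal report), while an older subcortical channel
    can still guide action regardless."""
    report = stimulus if cortical_ok else None  # conscious report channel
    action = stimulus                           # action-guiding channel
    return report, action

# blindsight: the subject denies seeing, yet points correctly
report, action = visual_system("light at left", cortical_ok=False)
print(report)  # None -> "I see nothing"
print(action)  # light at left -> the hand can still point to it
```

    This is just the two-channel account from the post restated as code: knocking out one channel removes report without removing the ability to act on the stimulus.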

  53. 53. Vicente says:

    Lloyd: Then, what would happen to somebody whose senses are all disabled and whose motor paths are cut? The guy would just rely on memory and imagination, with the organism completely out of order. This is a thought experiment.

  54. 54. Lloyd Rice says:

    Vicente: The body pathways are needed to build up the brain controller, not necessarily needed to maintain it. The example case would be patients who have, for various reasons, entered into the condition known as a vegetative state. Some such patients have recovered contact with the world after lengths of time up to 20 years. It is hard to know what state the brain was in during the time of the vegetative state, but some were able to converse nearly normally upon recovery. Some such cases have left us with multiple CAT and MRI scans taken during the vegetative state.

  55. 55. Kar Lee says:

    Lloyd and Vicente: From reading your back-and-forth discussions above, I suspect that you have very different understandings/definitions of the term “consciousness”. Maybe if you each try to define what you mean by consciousness (or how you use this term: first person, third person, more functional, more phenomenological, etc.), the discussion will be even more illuminating. My theory is that people fall into different camps based on the nature of their emotional attachment to this concept.

  56. 56. Lloyd Rice says:

    Kar Lee: Sorry, but I thought the concept was pretty clear. I would characterize consciousness as the first-person experience of things; the color of red, the feel of a textured surface, the biting sharpness of pain, etc. etc.

    I must assume that you think I am talking about something else when I say that a computational system could experience these things. Indeed, I seem to recall you saying that, to you, a collection of logic gates could do no more than a pile of rocks falling downhill. I emphatically do not believe that would be the case.

  57. 57. Vicente says:

    Kar Lee: After reading #56 I think I share most of Lloyd’s understanding of the concept of consciousness. For me consciousness is the existence of an inner mental life, sometimes referred to as phenomenological experience. Personally, I divide it into two parts: the one Lloyd referred to, which is a scenario built upon qualia (this is really consciousness), and a secondary part which is the understanding or interpretation that the “observer” has of that inner scenario, which I call awareness. Both are closely related, almost overlapping. Consciousness is boolean: you have it or you don’t. Awareness can have different degrees and depth.

    Regarding your second point, and referring to a discussion in the AI blog, I would say that an AI system that selects certain information in digital format is no more than a sieve that sifts cereal grain, as far as consciousness is concerned.

  58. 58. Vicente says:

    Kar Lee, I am aware that having an observer that watches a scenario poses several logical problems that I really don’t know how to manage. For me, the only way out is to assume that somehow everything is “mind substance” (tbd), so that matter, mind… are one single thing. So the spectator and the screen and the image and the projector… are all one single thing… I don’t get this one either.

  59. 59. Vicente says:

    Kar Lee, another idea I have is that maybe our knowledge of the Universe is still too poor, and there is no need to appeal to non-physical entities.

    Plato’s allegory of the cave made me think that we could be multidimensional beings, temporarily (for a lifetime) constrained to this 4-dimensional Universe. The dimensions above 4 would be responsible for our conscious experience, but we just perceive the projection of the Universe onto the 4D hyperplane.

    Most physicists just grin sarcastically when they hear this… and I am afraid they are right.

  60. 60. Lloyd Rice says:

    Re #57: just a minor detail: I would not consider the awareness aspect as technically a part of consciousness. I would say that you are conscious of the perceived aspects of the object. You are “aware” that the object is there. This is a whole interesting discussion of the different aspects of attention.

    Re #58: Here is an interesting issue. For me, it’s a question of the goals and stance of the entity (the organism, machine, etc.) with respect to the perceived object. That way, you don’t have to deal with the observer as part of the consciousness. At the same time, the observer can be conscious of itself.

  61. 61. Lloyd Rice says:

    Sorry. In #60 I should have said I would not consider the interpretation as technically a part of consciousness.

  62. 62. Kar Lee says:

    Lloyd, interesting. But I think further clarification is in order. When you said, “I would characterize consciousness as the first-person experience of things”, did you mean that you can imagine yourself being that thing and having the first-person experience from the perspective of that thing? Because if you stand outside a “conscious” thing, the first-person experience of that thing is ill-defined. For the sake of clarification, take for example someone who says, “This book on my desk has first-person experiences inside” – is that wrong? How do you go about showing it to be wrong?

    Vicente, since you agree with Lloyd on this first point, how will you respond to the statement that “This book on my desk has first-person experiences inside”, and how will you justify your response?

    I believe the heart of the disagreement is deeply related to how one understands the phrase “having first-person experience”, and to the emotional response this phrase evokes.

  63. 63. Lloyd Rice says:

    Kar Lee: No. Sorry. In your example, I would not say the book was having a first-person experience. I meant it in the sense that Ken Wilber talks about first-person experience, that “I” experience the universe as “I” look out upon it.

    I would say that the book cannot have a “first-person” experience because it does not have the computational structures to support consciousness. That it contains information is not enough. In my “one-liner”, I required that the information be “sufficiently organized”. By that, I mean that certain computational structures are available. That the chapters be cleverly laid out is not enough.

    To clarify some definitions, may I mention two cases?

    1. A Necker cube flips: I was aware of state A. I am aware of state B. I was not conscious of the flip. I could say that I was aware that a flip occurred and most would probably think it OK to say I was “aware of” the flip.

    2. A spot blinks off and a short time later another spot appears a short distance away (Dennett, Libet, others): If the timing is right, I am conscious of movement from point A to point B. I am (rationally, abstractly) aware that there was no movement.

  64. 64. Vicente says:

    Kar Lee, I will respond saying that that statement is absolutely false.

    Actually the book is just a piece of matter itself. The book, i.e. the piece of matter plus the attached concept/label, only exists in your mind.

    Essentially there is no book on the table, and there is no table either (it is just a wooden shape – oh, there is no wood, just molecules; oh, molecules are a concept too… and so on). You construct your world, superimposing ideas, concepts, names, labels, etc. on the physical world around you. The case of the book belongs to a special class, since it is in fact invented and made by humans.

    You could say that maybe the book contains information (for a reader) about the first-person experiences of a writer.

    There is no book if there is no reader. The existence of the book is subject to the existence of a reader or potential reader (if there is no current reader but could be a future one I admit the existence of the book).

    To summarise, there is no Universe if there is no observer/experiencer.

  65. 65. Vicente says:

    Lloyd, what do you think of the fact that you can choose to be aware of state A or state B of the Necker cube, and flip when you want?

  66. 66. Vicente says:

    Bertrand Russell said that the activities that make a man the happiest are those that bring him closer to the earth (I can’t recall the precise quote, and I’m too lazy to look for it, but that was the idea).

    I think he was right, and I believe it is because when we look at a skyscraper, the conceptual burden and effort our brain has to bear is much greater than when looking at nature: mountains, forests, rivers, clouds… which naturally “fit in”. The same applies to other activities: gardening vs. reading the Financial Times.

    I know this idea is very vague, but I think it has an important relation to the way we construct our reality. It is like flowing with the situation rather than forcing it.

  67. 67. Lloyd Rice says:

    Vicente: I can’t get too much into the free will thing. Basically, I think I do not believe in what is usually called free will. I think that my “decisions” consist of a healthy dose of randomness applied to all of the existing structure that has resulted from all of my past birth, growth and experiences. I know that I “can make” the Necker cube flip and also that if I just watch it, it flips “all by itself”. There’s that randomness.

    You could be right about minimizing the energy required to look upon the world, but that would only be if all else is equal. I think it would take only a very small effort to override any such tendency.

    All this is a bit too poetic for my tastes.

  68. 68. Vicente says:

    Lloyd, I was not thinking of the free will issue in this case (although it is also relevant). I was more interested in the possibility of an external agent acting on the brain.

    THANK YOU LLOYD!! You pointed out that there seems to be a RANDOM PROCESS behind it, so maybe the agent is acting on the brain’s Random Event Generator that controls the flipping. The agent causes a bias and you flip; that is the idea in the Princeton noosphere project, also referred to by Elitzur in his paper.

    I am trying to convince some friends in the biophysics department to start a project to search for physiological processes in the brain that involve some randomness and could be candidates to receive the action of the external agent.

    Poetry: can you imagine an AI system composing a real poem, the kind that really touches you, not just a set of words placed according to some rhyme rules?

    Sorry this is drifting away from blindsight too much.

  69. 69. Lloyd Rice says:

    I don’t think there’s anything magic about poetry. It takes a fair knowledge of how to use language, a good knowledge of human psychology, and a good dose of randomness. The people I have known who most seemed to enjoy poetry tended to get some sort of pleasure, which I never understood, from grasping at straws of meaning and significance buried in piles of word salad. Kind of like that last sentence.

    If it took blindsight to compose it, maybe it would have been better poetry.

  70. 70. Lloyd Rice says:

    As I recall, some computer-generated “poetry” passed the poets’ version of the Turing test several decades ago.

  71. 71. Lloyd Rice says:

    Peter: I was wondering if we might continue this discussion (of the last 2 or 3 cmmts) under another topic more appropriate to creativity. I found laughter, grief, and a couple of other emotions, but nothing on creativity. Not criticising. Your repertoire is amazing. Just wondering if I missed something.

    [It’s a thought, Lloyd: I haven’t come across anything that seems suitable as a basis for discussion, but if you know of an interesting paper let me know – or write a guest piece yourself ?? – Peter]

  72. 72. Kar Lee says:

    I just wondered if the three of us have hijacked this blog and turned it into a three-man show.

    Lloyd, from your clarification, I infer that your definition of something having consciousness is that if you can convince yourself, somehow (the details of the “how” are not important for the time being), that you could be the “I” in that object. Is that correct? Because I want to get to the bottom of the meaning of ‘”I” experience the universe’. Are you saying that if you are able to project yourself (by some reason convincing to you) into that object and imagine what it is like to be that object, then that object has consciousness, according to you? If this is not what you meant, then the statement that “there is an “I” in the object to look out upon the universe” is equivalent to “there is a first-person experience in the object”, and so it needs further clarification.

    Vicente, I see your point. It has a lot of similarities with Buddhist philosophy. However, in my question I was not referring to the information contained in the book. I could have asked about the cup on my desk having a first-person experience. Sorry about the confusion. So, your definition of consciousness is this extra something, perhaps call it the observer, that is like the one inside you, right? So, you also have to be able to identify with an object, imagining yourself being that “fellow observer”, for you to call that object a conscious object, correct?

    Looks like we are converging….

  73. 73. Lloyd Rice says:

    Kar Lee: You’re reading far too much into my words. Nothing fancy about contemplating the universe from some mysterious perch. No, I was just saying that if a computational mechanism (such as my brain) includes certain necessary components, then that mechanism will have the ability to perceive the world, and itself in the world.

    I do understand, however, that you might view a computational mechanism other than a brain as something less than a perch capable of perceiving the world. In a way, that is my point. I believe that there is nothing special about my brain other than that it includes said necessary computational mechanisms.

  74. 74. Kar Lee says:

    Lloyd, maybe I did not quite make it clear enough in my question. Let me try again. We are talking about what we mean by consciousness so that we don’t talk past each other. We are not talking about how to convince ourselves that something is conscious. The first part is just to set up criteria and a definition for what we mean by consciousness; then the second part (the how part) is to show how something satisfies our criteria and definition. So the discussion about computational structure belongs to the second part, which for now we are not dealing with yet. My request for clarification is regarding your statement about the “first person experience” as ““I” experience the universe as “I” look out upon it.” Is this “I” referring to you? I suspect it is not. If not, then since the concept “I” is completely ill-defined for a third person without being the person yourself (since you cannot claim that there is an “I” in any body other than your own body, doing so will only make the statement equivalent to saying there is a “consciousness” in there, which just turns the statement into an assertion rather than the definition we originally intended), what is this “I” referring to? So more clarification is required if this “I” is not referring to you. See my point?

  75. 75. Lloyd Rice says:

    Kar Lee: OK. I will try this. I will alternately speak of entities other than myself and I will then also say how I myself experience the things I am talking about. I hope this is not needlessly wasting a lot of words.

    First, it is my belief that any mechanism which meets the information and computational requirements will experience consciousness. From my point of view, looking at that entity, I cannot directly confirm that the entity experiences consciousness. But I will say that I do not believe that Chalmers’ philosophical zombies can possibly exist in the real world. Therefore, I must conclude that if the entity tells me it experiences consciousness, then I must believe what it is saying. Thus, I believe that the entity experiences qualia related to (resulting from) its perceptions, including color, pain, sound, tactile response, etc. etc.

    It is of interest here that there exists (in my imagination, at least) a whole range of behaviors related to blindsight and several species of zombies, but let me continue with the original point.

    I am also such an entity – an entity that experiences qualia as I perceive the world around me. And in saying this, I assure you that I am indeed speaking of this self in a body that is known to various friends as “Lloyd Rice”. That body/brain, taken together with the computational result of the brain in operation, is what I call “me”. Is that clear enough? Am I missing the point?

    I think much of this is verging on meaningless word salad. Is any of it meaningful to you?

  76. 76. Lloyd Rice says:

    OK. I guess I missed the point again. You didn’t want to hear how I decided the entity was conscious, nor were you yet ready to talk about mechanisms. So what is the question? Are you concerned about the relationship between qualia and attention? All this about what my “I” refers to is exactly what I have been trying to say. What am I missing?

    Maybe you want to talk about the illusory aspects of qualia, the fact that consciousness is not all it seems to be. For example, the world appears to me as a continuous, seamless flow of objects in space and time. But I know that at least some aspects of that are illusory. It is an interesting question just how the consciousness mechanism goes about constructing a seamless appearance based on the fragmentary realities of perception.

  77. 77. Vicente says:

    Lloyd: “It is an interesting question just how the consciousness mechanism goes about constructing a seamless appearance based on the fragmentary realities of perception”

    It is extremely interesting! A core issue in this field. Probably the cause of blindsight is a malfunction of such a mechanism.

    “is my belief that any mechanism which meets the information and computational requirements will experience consciousness”

    Unless you specify the requirements and test the theory, this statement is tautological: you are saying that the mechanism will experience consciousness if it meets the requirements to experience consciousness.

    Kar Lee: “So, your definition of consciousness is this extra something, perhaps call it the observer, that is like the one inside you, right?”

    Well, it is both: the observer and the observed object (the main content), plus the understanding, the whole experience. I remember a discussion on the container-content pair regarding consciousness. A very difficult issue in the logical and philosophical arena.

  78. 78. Lloyd Rice says:

    Vicente: I do not find it so mysterious that the mechanism would be able to construct a seamless representation. You could perhaps compare the process to curve smoothing. There are many known ways to construct a smooth curve from partial or jagged (noisy) data. It would then logically follow that the smoothed result is used in further processing, such as the binding in the GW model.
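
    For illustration only, here is one of those known ways: a simple moving-average filter that turns jagged samples into a smooth curve (a crude stand-in for whatever the brain actually does):

```python
def smooth(samples, window=3):
    """Moving-average filter: replace each point with the mean of
    its neighbourhood, shrinking the window at the edges."""
    out = []
    n = len(samples)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neighbourhood = samples[lo:hi]
        out.append(sum(neighbourhood) / len(neighbourhood))
    return out

# Jagged input: a straight ramp 0..9 corrupted with alternating noise.
raw = [x + (1 if x % 2 else -1) for x in range(10)]
smoothed = smooth(raw)
# The smoothed curve tracks the underlying ramp more closely than raw does.
```

    As with any interpolation, the smoothed points are only a best guess, which is the point Vicente picks up below.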

    It would seem to me that blindsight is not this, but a blockage somewhere later, such as in the binding process.

    I agree that my claim about the computational requirements is incomplete until I can specify those requirements. I do not believe this makes it tautological, just incomplete.

  79. 79. Vicente says:

    Well, in curve smoothing the interpolated points that build the curve are just a “best guess”.

    So, based on a simple scaffolding or skeleton, the brain adds the flesh and the skin and combs the hair; I find it astonishing. So:

    – Why would evolution do that if there is no extra real information? Just to make the picture nicer for you?

    – What if all the added elements are significantly inaccurate, which is usually the case according to perception and memory experiments?

  80. 80. Lloyd Rice says:

    Vicente: As an extremely simplified initial version of the requirements, I have stated that it be able “to look at itself”. This will clearly need to be expanded to include senses other than vision. And it will need to take account of the many aspects of the self that are represented in the data structure, to be able to evaluate characteristics, such as the emotional state, whether the body is hungry, tired, etc., and to be able to act on this evaluation by initiating appropriate motor activity.

  81. 81. Lloyd Rice says:

    Vicente: The “scaffolding or skeleton” actually consists of fragments such as the intermittent visual inputs available at each saccade of the eyes. This is what needs to be smoothed. Of course it is a best guess. That is just what’s needed. The picture is not nicer “for me”. It has been filled out so the motor activations and other such outputs can be computed smoothly.

  82. 82. Vicente says:

    Lloyd: “Consciousness is information sufficiently organized that it can look at itself…”

    and may I finish the statement: “…and doesn’t understand at all what it is seeing”.

    This is the big tragedy, that we cannot answer the most important question of all: what am I?

  83. 83. Lloyd Rice says:

    Vicente: Well, I think my statement is the answer to the question. To rephrase: “I” am the combination of the organized information and the body that information is in. Some might say that “I” am only the organized information, excluding the body, but surely that cannot be correct. So “I” am both, all as according to para 4 of #75.

  84. 84. Kar Lee says:

    Lloyd, I think I understand your definition now. You did miss the point in #75, as you said. But your answer there reveals your definition to me: a completely third-person definition, “when it meets the information and computational requirements”. Note that it has nothing to do with qualia; it has nothing to do with your being conscious yourself.

    On the other hand, Vicente’s definition focuses on qualia, and all those mental quality things, the observer within and the observed without. When we smash these two definitions against each other, we can understand why we are not talking about the same thing.

    Let me offer my definition:
    An entity is conscious if I can imagine myself being that entity and feel what it is like to be that entity (I could be that thing). (obviously stolen from the “what it is like” camp)

    I challenge anyone reading this to come up with another definition to address the “criteria” part. Definitions like “Something is conscious if it is blue” do not count (you can replace “blue” by something of the same sort, like “computerized”, “driven by software”, “computationally sophisticated”, “informationally integrated”), because if you do that, you will have to explain why. But a true definition does not require further explanation. Example: sublimation is defined as a solid turning directly into the gas phase. No further explanation is required, because it is true by definition.

    I would also like to argue that “consciousness” is not a well-defined concept from a third-person view. It is not a third-person observable thing. That is why we have so much difficulty defining it.

  85. 85. Lloyd Rice says:

    Kar Lee: OK. Let’s see if I can add anything useful to this. I agree that my definition is stated from a 3rd-pers point of view. As I considered different versions of the wording, I did so by trying to place myself in the role of the information structure. In that case, what would need to happen in order for me to become conscious, given that I am trying to “take on”, to “assume”, the characteristics and properties of the information entity? Obviously, that is difficult to do, not knowing just exactly when I have succeeded. But that is what I tried to do. I then would try to think through what would happen as the computations were carried out. I believe I can do this with some degree of accuracy, based on a lifetime of writing software.

    The issue is this: Suppose the entity is “viewing” a scene, say the video screen in front of me now. A typical scene-analysis software component finds the edges, the features, etc, and succeeds in framing the outline of the screen and identifies the object as an Acer 19″ monitor. It then proceeds in a similar fashion to decode the pixel info. At this point, let me skip ahead several subroutines and assume that character recognition, etc. have been applied and that an abstract data representation of the entire tableau, monitor, lamp, kbd, etc. has been constructed. At the point where GW theory would invoke “binding”, this tableau gets connected to various memory contents: what I was typing, when I logged onto ConscEntities.com, what are my plans, and so on. A number of other more-or-less irrelevant items are also included in the binding, the fact that my cup of tea is getting cold, etc. And all of this information also gets connected to certain motor systems which carry out the actions of continuing to strike keys as commanded.

    So the question is this: what do “I” (the imagined “I” of the info structure) sense? What do I see?, feel?, think? Everything that is relevant has been made immediately available via the binding and in the form of easily accessible data structures.

    I then make the so-called “big” leap. Why should “I” not be CONSCIOUS of all of these things that I am processing?

    I can find no good answer that would deny my ability to sense the data I am processing. The best I can say is that what we normally would call “being conscious of the screen, keyboard, tea, etc.” is really no more than processing the data in the structures that have been presented.
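
    The binding-and-broadcast step described above can be sketched as a toy program. All the names here (Module, GlobalWorkspace, the posted strings) are hypothetical illustrations of the GWT idea, not anyone’s actual model:

```python
class Module:
    """A specialist processor: it posts partial results to the
    workspace and receives whatever gets broadcast back."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def receive(self, bundle):
        self.received.append(bundle)

class GlobalWorkspace:
    """Binds the current contributions into one bundle and
    broadcasts it to every registered module."""
    def __init__(self):
        self.modules = []
        self.contributions = {}

    def register(self, module):
        self.modules.append(module)

    def post(self, module, content):
        self.contributions[module.name] = content

    def broadcast(self):
        bundle = dict(self.contributions)   # the "bound" tableau
        for m in self.modules:
            m.receive(bundle)
        self.contributions = {}
        return bundle

gw = GlobalWorkspace()
vision, memory, motor = Module("vision"), Module("memory"), Module("motor")
for m in (vision, memory, motor):
    gw.register(m)

gw.post(vision, "Acer 19-inch monitor, lamp, keyboard")
gw.post(memory, "logged on, tea getting cold")
bundle = gw.broadcast()
# Every module, including motor, now holds the whole bound scene.
```

    On this sketch, “being conscious of the screen, keyboard, tea, etc.” would correspond to every module processing the same bound bundle after the broadcast.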

  86. 86. Lloyd Rice says:

    I agree that there are several more issues to be considered. There are some special cases not properly represented in the above scenario. But let us begin with this.

  87. 87. Lloyd Rice says:

    And I have now finished the cup of tea.

  88. 88. Kar Lee says:

    Lloyd, you confirm my conjecture in one case. You are comfortable imagining yourself being a computational system, biological or electronic. That is why you have no problem seeing consciousness in a computer.

    People who are hard-core materialists are people who can comfortably imagine themselves being a group of pure material. With this ability, the hard problem goes away.

    People who believe in the existence of the hard problem are not true materialists, though many will insist they are. In the back of these people’s minds they have doubts like “why am I associated with these brain processes?”, because the comfort level is not absolute. The slight discomfort leads this group of people to seek further explanation to bridge the “gap”, or something extra.

    Those who cannot comfortably imagine being a material system only, and think that there has to be something else, are the dualists.

    Those who are extremely uncomfortable with imagining themselves being material, to the point that they doubt the reality of the material world, are the internally focused idealists.

    Dualists often cannot understand why Dennett does not recognize the hard problem, because they fail to understand that Dennett is EXTREMELY comfortable with imagining himself being a group of pure material. The emotional pull to seek further explanation is not there.

    Philosophers fall into different camps according to their emotional responses to the word “Consciousness”, and their targets of explanation are completely different: talking past each other is ensured.

  89. 89. Lloyd Rice says:

    Kar Lee: An excellent summary of the basis for various points of view.

    However, it seems to miss one factor, which I admit may be a by-product of my staunch materialist position. The problem is in the reconciliation between a given person’s point of view and what seem to be common, verifiable scientific results. If I find that some inner belief, desire, inclination conflicts with what I can observe (or at least read about) based on my educational background, then I must reject the inner belief in favor of the scientific observations. Of course, there are grey areas. But that is my approach to sanity of belief.

    It appears to me that many people do not share my willingness to discard the inner tendencies.

  90. 90. Vicente says:

    Kar Lee, link your cmmt #88 with the introspection blog and you will find that it is not altogether consistent.

    The only way Dennett can feel extremely comfortable with being just matter is by denying the existence of qualia, in which case nothing more is to be said. (Having said this, what matter itself is remains quite an issue.)

    Lloyd, since you invoke scientific observations I will also refer again to all that was said in the introspection blog. Regarding #85: since you mentioned the “big leap”, which I call the “void gap”, I take it.

    Peter: in the last few days a lot of weird text links have been appearing at the bottom of the page. Is this normal, or is your site being hacked? I don’t think it is me who is being hacked, since it only happens when I log into your site, either from the office or from home.

  91. 91. Vicente says:

    Peter: I am trying to paste this text in the comments textbox for you to check it, but the system does not allow me to submit it, I guess that you have protection mechanisms to prevent spam.

  92. 92. Peter says:

    Vicente, yes – two attempts by you to post the unwanted stuff were picked up by the spam filter. No need to post it, since I can see it too, and I will try to sort it out. If anyone out there has any clues, please let me know by email on wordsalad [and then the at thing and then] @consciousentities.com

  93. 93. Lloyd Rice says:

    Vicente: Do you have a link to the “introspection” blog?

  94. 95. Lloyd Rice says:

    Vicente: OK. I assume you are referring to the “Introspection” page of this blog, the page just before “AI Resurgent”. I am now reviewing that material.

    Kar Lee: As for your comments and Vicente’s objections, my point in bringing up scientific evidence was that, as stated, your observations are not committed to a particular philosophy. What I would like to try to do is to reach a personal philosophy that is consistent with neuroscientific knowledge, understanding of course that that is incomplete and always will be.

    But my goal is always to find consistent areas between introspection, neuroscience, and my knowledge of software and hardware.

  95. 96. Lloyd Rice says:

    Vicente: Yes, indeed. Thank you.

  96. 97. Vicente says:

    Kar Lee: I don’t feel comfortable with any approach, and I think I don’t have any emotional pull, except for the vertigo that any human being could feel when confronted with a nihilist end to his existence. That fact could make me favor those approaches that make some kind of way forward possible, though I admit it might not exist.

    Having said this, I believe that anybody exposed to the issue of consciousness who doesn’t feel a profound bafflement either has not understood the problem at all, or is a Tibetan lama who has reached enlightenment or some equivalent mind state beyond my understanding.

  97. 98. Lloyd Rice says:

    I find that appreciating that I am a computer is a mild sensation compared to the realization that there is no god.

  98. 99. Kar Lee says:

    I think we are probably going to break the threshold (perhaps the record, Peter probably knows right away) of >100 comments….

    Vicente, I think your emotional pull is towards the qualia side, which Dennett denies.

    Lloyd, the term “scientific” is also not a well-defined concept. If you look at Paul Almond’s discussion of the supernatural at http://www.paul-almond.com/Supernatural.htm, you can replace the word “supernatural” with the word “scientific” and the same arguments apply. There are only things that are logical, or illogical, or that cannot be determined. There should never be a concept of “scientific”, because science is an ever-expanding enterprise with no clear boundary. What makes matters worse in discussions of consciousness is that the target of explanation is different for different people. Here is a quote from Chalmers to illustrate my point:

    “The ambiguity of the term “consciousness” is often exploited by both philosophers and scientists writing on the subject. It is common to see a paper on consciousness begin with an invocation of the mystery of consciousness, noting the strange intangibility and ineffability of subjectivity, and worrying that so far we have no theory of the phenomenon. Here, the topic is clearly the hard problem—the problem of experience. In the second half of the paper, the tone becomes more optimistic, and the author’s own theory of consciousness is outlined. Upon examination, this theory turns out to be a theory of one of the more straightforward phenomena—of reportability, of introspective access, or whatever. At the close, the author declares that consciousness has turned out to be tractable after all, but the reader is left feeling like the victim of a bait-and-switch. The hard problem remains untouched.”

    For Dennett, a master of third person view, consciousness has been explained, it is just the details that need to be worked out.

  99. 100. Lloyd Rice says:

    The ideal scientist is open to changing his beliefs.
    The ideal supernaturalist is fixed in his beliefs.

  100. 101. Vicente says:

    Kar Lee: it is not an emotional pull, I EXPERIENCE QUALIA.

    Lloyd: The ideal scientist has no beliefs as a scientist. He knows something or he does not, being aware of what TO KNOW means in science. Just a pure Popperian system.

  101. 102. Lloyd Rice says:

    Vicente: Maybe we differ on our ideals, and I have never read anything by Popper, but I would argue that every functioning mind operates by a system of beliefs. I believe a large collection of things about how the world works. Without those, I could not act. But the question is, what do I do when I encounter something that contradicts part of my internal belief structure? My actions then serve to define whether I am a scientist, a religious fanatic, a supernaturalist, or something else.

  102. 103. Lloyd Rice says:

    To me and my philosophy of mind, the belief system is not unlike the Cyc system: a large database of interrelated “facts”, any of which may be true, false, questionable, etc.
    Like the Cyc game, I must continually “check my facts”. These “facts” are abstract, rationally based items of information, as opposed to things I KNOW based on my body/emotion structure, such as “I am hungry”.
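
    A minimal sketch of that picture, using a toy schema of my own invention (the statuses and the update rule are illustrative, not Cyc’s actual representation):

```python
class BeliefSystem:
    """A tiny Cyc-flavoured store: each "fact" carries a status
    that gets revised when contradicting evidence arrives."""
    def __init__(self):
        self.facts = {}   # claim -> "true" | "false" | "questionable"

    def believe(self, claim, status="true"):
        self.facts[claim] = status

    def check_fact(self, claim, evidence_supports):
        """The scientist's move: contradicting evidence demotes a
        held belief instead of being ignored."""
        current = self.facts.get(claim, "questionable")
        if not evidence_supports:
            current = "false" if current == "questionable" else "questionable"
        self.facts[claim] = current
        return current

beliefs = BeliefSystem()
beliefs.believe("the sun rises in the east")
beliefs.check_fact("the sun rises in the east", evidence_supports=True)    # stays "true"
beliefs.check_fact("heavier objects fall faster", evidence_supports=False) # demoted to "false"
```

    What one does at the `check_fact` step, on this sketch, is exactly what separates the scientist from the fanatic.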

  103. 104. Lloyd Rice says:

    In the last sentence of #102, I should have said that “My actions then serve to define what I aspire to be, a scientist, …, etc.” That is the difference between the ideal scientist and the real-world human scientist.

  104. 105. Vicente says:

    Lloyd, I will refer again to the introspection page. And I will clarify for Kar Lee that “scientific” is a perfectly defined concept, according to a well-defined scientific method. A different issue is the scope of science and which things can be scientifically approached.

    Karl Popper is the philosopher who established what requirements a theory must fulfil in order to be called scientific. Basically, his criterion is the concept of falsifiability, i.e. a theory is scientific if it can be proven wrong. This idea is currently accepted by almost the entire scientific community. I really encourage you to read Popper, who also made important contributions in the field of consciousness. He is one of the most important philosophers of the 20th century.

    Kar Lee, it is precisely because qualia cannot be directly observed, measured and analysed in different laboratories, with results compared, that they pose a big problem for science to take them as a research object. I refer again to the introspection blog, and to why the behaviourist school appeared.

    Of course you could appeal to Gödel and Russell and say there is no axiomatic system to rely on, etc., or go epiphenomenal… but all that has nothing to do with the objectivity that leads to an absence of beliefs in the scientific method.

    I think we have got too far from blindsight.

  105. 106. Vicente says:

    Lloyd, of course, as you said, I am talking of IDEAL scientists in IDEAL conditions… one of the usual criticisms of Popper’s ideas is that scientific progress is usually carried out by non-ideal scientists in non-ideal conditions (real life).

  106. 107. Kar Lee says:

    Vicente, Popper’s idea of falsifiability was quite convincing, and I bought into it for a while. But is the Copenhagen interpretation scientific? It has been the official interpretation of quantum mechanics since the 1920s. Paul Feyerabend’s view is much more convincing to me at this moment, hence my comment about “scientific” above: http://www.marxists.org/reference/subject/philosophy/works/ge/feyerabe.htm

  107. 108. Lloyd Rice says:

    Is blindsight a malfunction of the binding mechanism? How does binding work?

  108. 109. Vicente says:

    Lloyd: that is what I think, as I tried to explain in #26 and #31. The point is that a binding mechanism, by definition, binds two “things”. One of the things is clear: the brain. What is the other one?

    So my view is that, in GWT fashion, the information of a particular present situation is integrated and a “data package” is created. This operation is repeated periodically at a certain rate.

    Now what happens to this data package? My opinion is that it is broadcast to the other thing that is bound, through the binding mechanism (comms channel/transponder).

    In parallel, the brain is equipped with autonomous modules that can use the information subconsciously.

    So what happens in blindsight?

    – The data package is not properly created, failing to integrate the visual information, which is still available to the autonomous modules.

    – The data package is not properly broadcast, losing part of the data.

    I believe that GWT provides a great theoretical basis to design experiments to see whether, in cases like blindsight, there is something wrong with the afferent neural stream paths from the visual cortex or something similar.

    The most frustrating part for me is not having a neurolab with a strong team in order to design and carry out experiments to check these ideas. I think I agree with you that the only real stuff we have at hand is the brain.

    Kar Lee: Can the current body of quantum mechanics be proven wrong? I think that its value comes from the fantastic experimental results that underpin it, not from any “interpretation”; and, as I said, what matter itself is remains an issue. The point is that we are aware of it. So if one day a new BETTER theory comes along, that’s it: bye-bye quantum mechanics.

    Nevertheless, as I said, science has its limits. For example, Newtonian mechanics was not really overridden by general relativity and quantum mechanics; we just know now that it can only be applied within a certain range of energy and mass.

  109. 110. Lloyd Rice says:

    According to a recent paper, Edelman and Tononi found that they could distinguish 5 or 10 individual frequency “bands” within the typical range of EEG signals, i.e., 10 or 20 to 40 Hz. So it seems plausible, as they speculated, that each frequency band could serve to link together a given set of GWT-type perceptual processing modules. But the problem I have with this is that the individual modules can do no more than “be linked up”. So what happens next? Of what value is it to identify a specific collection of modules if they cannot talk back? There seems to be no additional mechanism for the module collection to communicate, either within the set of linked modules or with any other modules or controller.

  110. 111. Vicente says:

    Lloyd: just to be sure we understand each other, what do you mean by the “binding mechanism”? I guess you mean the mechanism by which information is integrated so that we have a unity of consciousness or conscious experience, don’t you? Because by “binding mechanism” I understand the one that binds brain physiological states, based on brain tissue, with mind states based on qualia.

  111. 112. Lloyd Rice says:

    Well, until we have a definitive model for exactly how the brain works, it’s a little hard to say just what role the various pieces should have. The following is something of a personal fabrication, based on my understanding of Baars, Shanahan, Metzinger, Edelman, Tononi, Koch, and a few others.

    Suppose you have a roomful of people, somewhere around 50 to 1000, let’s say. Each person is a specialist in some area of expertise. The people represent the various perceptual modules of the GWT. Some fraction of the people have recently been informed of activity, each in their own area of expertise. The problem is to find out “what’s going on out there”. Following the Edelman-Tononi theory, let’s say each person has a small handful (say 10) of colored cards. For the moment, let’s give you a supervisory role. We hope to be able to eliminate that role once we have the plan down pat. You want to know what the video cameras on the roof are pointed toward. So you want to gather information from the video analysis experts. There are several, including those who can report types of objects; others can report colors, shapes, and many other such details. Somehow, you get all of those people to hold up a red card. What does that tell you?

    For one thing, they all now know about each other. But they cannot talk to each other. They don’t have a communication channel and, besides, they would probably not understand each other’s specialty. In particular, you want these video people to somehow be able to notify the librarians and the statisticians. You need to know whether objects like those in the present scene have been spotted before. Even if these people could talk to each other, how do you find out what they know?

    That is my view of the binding problem.
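    Just to make the gap concrete, here is a toy sketch in code. It is entirely my own construction, not anything from Baars or Edelman–Tononi, and every class and name in it is invented for illustration: a shared frequency “card” can tag a coalition of modules, but nothing in the model gives the tagged modules any channel for exchanging content.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Module:
    name: str
    specialty: str
    band_hz: Optional[float] = None  # the frequency "card" currently held up

class Workspace:
    """The supervisor role: can hand out a card, and see who is holding it up."""
    def __init__(self, modules):
        self.modules = modules

    def tag(self, specialty, band_hz):
        # Ask every expert of one specialty to hold up the same card.
        for m in self.modules:
            if m.specialty == specialty:
                m.band_hz = band_hz

    def coalition(self, band_hz):
        # The supervisor can list who is tagged together...
        return [m.name for m in self.modules if m.band_hz == band_hz]

modules = [
    Module("object-detector", "video"),
    Module("color-analyzer", "video"),
    Module("shape-analyzer", "video"),
    Module("librarian", "memory"),
]
ws = Workspace(modules)
ws.tag("video", 40.0)      # all the video experts hold up the "40 Hz" card
print(ws.coalition(40.0))  # ['object-detector', 'color-analyzer', 'shape-analyzer']

# Note what is missing: Module has no method by which one tagged expert can
# send content to another, or to the librarian. The card binds them into a
# set without giving them a channel.
```

    The point of the sketch is only that the tagging step and the communication step are different things, and the tag alone supplies only the former.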

  113. Lloyd Rice says:

    My “binding mechanism” is pretty much as you describe, that is, integration. I am not yet committed as to exactly what its role is regarding consciousness.

  114. Lloyd Rice says:

    Vicente: Would you like to try this off-line for a while? You could click on my name in any comment and send me an email (contact) from there. I hate to lose the audience, though (if any).

  115. Lloyd Rice says:

    I know that the “roomful of people” metaphor has various shortcomings. But I believe it might also provide a vehicle to explore certain issues pertaining to consciousness and the mechanisms which create it.

  116. Vicente says:

    Lloyd, as a metaphor it is quite descriptive. I haven’t read the original papers so I don’t dare to say much. My first impression is that the metaphor points to a very important missing role in the model, one that needs to be identified in neurological terms.

    I also believe that the analogy of people holding up cards may be too simple to represent brain modules. Why are you so sure that there is no cross-talking among modules? Have all the neural paths been accurately mapped?

    Regarding blindsight, it seems that the people in charge of locating objects manage to tell the people in charge of storing information without telling the supervisor.

    I agree that GWT seems to be a very good model to explore issues relevant for the existence of a conscious experience.

    I would really need to have a more solid and wider neuroanatomy knowledge to judge the metaphor.

    But reading your concern about the lack of inter-module communication channels, I am wondering whether, in my own idea, what happens is that each module sends the information on its own and the conscious experience is then integrated on the other side. So what we have is a bus of parallel channels, each one transmitting its own data: vision, hearing, emotions(?), etc. In this way blindsight is “explained” as the result of a malfunction in the vision/sight channel.

    This view can also explain some other odd perception and psychological effects as due to a problem in the synchrony of channels. Synesthesia could also be related to this, if there is interference between channels.

    I know, Lloyd, that imagination is killing me, but what can I do? Nobody solves my problem.

  117. Vicente says:

    The previous comments on science and different people having different approaches have made me think that maybe the field of consciousness, including problems like blindsight and similar, could borrow one of the most powerful tools of the algebra and differential equations field, i.e., existence and uniqueness theorems.

    Given an equation (problem), an existence and uniqueness theorem can tell us whether a solution exists in a certain domain, and whether such a solution is unique. If no solution exists then we can save ourselves the time; there is no point in looking for one.

    I wonder if it could be proven that qualia do not belong to the physical realm, irrespective of what they really are.

    So given the problem of consciousness/qualia can it be proven that no solution exists in the domain of physics?
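    For reference, the paradigm case of this kind of result is something like the Picard–Lindelöf theorem for ordinary differential equations (a standard textbook statement, added here only to make the analogy concrete):

```latex
% Picard–Lindelöf (existence and uniqueness), stated loosely:
% for the initial value problem
\[
  y'(t) = f\bigl(t, y(t)\bigr), \qquad y(t_0) = y_0 ,
\]
% if f is continuous in t and Lipschitz in y,
\[
  \lvert f(t, y_1) - f(t, y_2) \rvert \le L \,\lvert y_1 - y_2 \rvert ,
\]
% then a unique solution y(t) exists on some interval around t_0.
```

    Whether anything analogous could even be formulated for the qualia problem is, of course, the open question.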

  118. Lloyd Rice says:

    Vicente: You are quite correct that there are other information channels. Specifically, all of the white matter, which consists of axons interconnecting many of the cortical regions. And, of course, nearly all cortical regions are strongly connected to other very close-by regions. So, it is as if some people in the crowd have cell phones which can call certain groups of others, but not everybody can talk to everybody.

    Now, there’s another problem. Suppose the video view consists of 5 identical candy bars (in the wrapper), each bar numbered 1 to 5. The feature detector modules detect identical patterns on each wrapper, but these are somehow kept separate for each of the 5 bars. One possibility is that you cannot have all 5 in attention at the same time. Or, if you do, you cannot also keep the individual numbering in awareness.

    In other words, certain people can hold up more than one colored card at a time. How does the supervisor sort this out? Are the white matter channels all connected up for the wrapper pattern (in general) without respect to specific bars? If so, when or how is a particular bar connected to the number detector for each individual digit and the specific physical position detectors for each individual bar?
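    A toy illustration of why this matters (again my own construction, not anyone’s actual model): if detectors only announce *which* features are active, without pairings, then five numbered-but-identical bars collapse into an ambiguous pile of features.

```python
from itertools import permutations

# The actual scene: each bar pairs a number with a position.
scene = [("bar", digit, pos) for digit, pos in zip(range(1, 6), "ABCDE")]

# Detectors that merely announce active features (sets, no pairings):
digits_seen = {digit for _, digit, _ in scene}    # {1, 2, 3, 4, 5}
positions_seen = {pos for _, _, pos in scene}     # {'A', ..., 'E'}

# From the announcements alone, any digit could occupy any position:
candidate_scenes = list(permutations(sorted(digits_seen)))
print(len(candidate_scenes))  # 120 possible pairings -- the binding is lost
```

    The 5! = 120 candidate scenes are exactly the information the “cards” fail to carry: which feature goes with which bar.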

  119. Lloyd Rice says:

    As for the literature, Murray Shanahan has some of the most detailed ideas I am aware of for how the modules might communicate. He has plenty of hardware and software experience, so the papers tend to get a bit technical.

    It’s interesting, though, how some of the very early ideas of Gerald Edelman about how the white matter communication works seem to be applicable to the GWT model. An interesting aspect of all of this is the role of the thalamus.

    Don’t worry. I, too, wish I knew more about the anatomy.

  120. Vicente says:

    Lloyd, I believe that the fact of having 5 identical candy bars in the video view can never occur. I think that the attention focus moves sequentially, counting up to 5 bars, and this information is stored in short-term memory and used when required. Once the scene is complete you have the impression of the five candy bars being there, but they came in one by one.

    An interesting perception/attention exercise is to determine what area of your visual field you can pay attention to. You will find that it is a very small spot in the center. This focus moves around, gathering information about the environment. Of course, the peripheral visual field is also important, particularly for detecting changes.

    I believe your problem arises because you have used a parallel data input, to a process that can only be serial.

    It seems that an interesting book by Shanahan about the inner life is to be published in 2010.

  121. Lloyd Rice says:

    Vicente: I believe you are probably correct that the images of the bars arrive serially. There may be an initial scene analysis in which five “somethings” are perceived within the same visual field, but even this would need to be analyzed carefully because, as you say, the optical input arrives in chunks timed by the saccades.

    I look forward to the Shanahan book.

  122. John says:

    “Marcel notes that patients who have a right blind field still have an underlying visual field on the right side and that this can even contain conscious visual experience. This sounds a bit like the darkish space that we all experience if deprived of visual input on one side. As Marcel says: “A question that naturally arises is whether the loss is a ‘total’ loss of visual consciousness in the blind field. It is often assumed to be so, especially by those who discuss blindsight without carefully reading the literature or working with the subjects. One can immediately respond negatively to the question.””

    Wikibooks Consciousness Studies: http://en.wikibooks.org/wiki/Consciousness_Studies/Neuroscience_1#Blindsight

    If Marcel is correct (Marcel is one of the most experienced investigators of blindsight) then all that “blindsight” implies is that subconscious/non-conscious behaviour is possible.

    Given that the CNS is a multi-level, hierarchical processor, blindsight is not in the least surprising. Does the tendon reflex indicate something mysterious, or nothing more than that the CNS has spinal processing as well as cerebellar, pontine, thalamic, cortical etc. processing?

  123. John says:

    Vicente said: “An interesting perception/attention exercise is to determine what area of your visual field you can pay attention to. You will find that it is a very small spot in the center. This focus moves around, gathering information about the environment. Of course, the peripheral visual field is also important, particularly for detecting changes.”

    Is experience a space (i.e., does it have simultaneous content)? Even having a spot in experience suggests simultaneous content, so is Vicente assuming that conscious experience is a projected hyperspace, or observing that this is the case? If we were scientists we would not dismiss our observations so lightly – see http://newempiricism.blogspot.com/2009/10/simple-summary.html

  124. Vicente says:

    John: I apologise, but I don’t understand your point very well. I will try to explain what I meant.

    One thing is the content of your conscious experience, which I believe can be composed of different “objects/concepts” simultaneously; another thing is how that content got there.

    What I am saying is that you cannot pay due attention (or perceive in detail) to several objects in your visual field concurrently. You cannot even pay attention to the whole of one single object. Try it: look at the “S” in the “Submit Comment” button and tell me whether you can clearly identify the last letter of the two words.

    The point is that when your brain is trained, your pattern recognition networks are extremely fast, so the exploration and construction of a certain scene can be done at an incredible speed. That is why sometimes we feel that we can almost read a whole long sentence in one go. I remember reading about experiments in which the brain sometimes just takes a few parts and then fills in the gaps.

  125. John says:

    Vicente, the point I was making in my two previous posts is that we can respond to stimuli without them passing through our conscious experience. The CNS is a stack of processors with the higher level processors controlling the lower level processors. Our eyes flick towards a stimulus before there is any possibility of bodily response and the cerebellum provides us with a smooth gait without any conscious intervention etc…

    The reason we can read a whole sentence in one go is that most of the processing involved is non-conscious. The non-conscious parts of the brain just present the conscious parts with their handiwork. Could you imagine how time consuming it would be for me to write this post if I did it all consciously? I might start with working out how I created the phrase “could you imagine” but really I would not even know where to start! It was all done down there in my non-conscious speech processor.

    As you say, the communication between the space that is conscious experience and the non-conscious parts of our brain is feeble at best. Conscious experience is damn near “epiphenomenal” but not quite – after all, you told me about the form of your experience in your post above:

    “An interesting perception/attention exercise is to determine to what area of your visual field you can pay attention. You will find, that it is a very small spot in the center. Now, this focus moves around gathering information of the environment. Of course, perimeter visual field is also important, particularly to detect changes.”

    If you can tell me what it is like it cannot be epiphenomenal, its powers of communication with the rest of the brain are, as you say, pretty restricted though. But what is really weird about your conscious experience is that it is a geometrical form, a space hanging in a projective geometry. See http://newempiricism.blogspot.com/2009/02/time-and-conscious-experience.html

  126. Vicente says:

    John: Yes, I agree with what you say in the first three paragraphs. Regarding the last paragraph, I would need to go through your page again and reflect more on it before saying anything.

  127. Vicente says:

    John, just a quick reaction. I am not sure that vision is strictly speaking a projective geometry, since we have binocular vision. If you just look through one eye it definitely seems to be, which is reasonable, the retina being the projection plane.

    I would say that what is really weird about my conscious experience, apart from everything is the rest.

  128. John says:

    Binocular vision is probably best characterised as a “distortion field”. Hold up your finger in front of this screen; the result is not 3D, it is the placement of a semi-transparent finger shape in front of the screen. 3D is where the rear of an object can be inspected as well as the front.

    Our sense of depth – which we call “3D” vision but is not 3D – seems to be related to the possibility of action, such as “reaching out” and I would guess it is more linked to the existence of time within our conscious experience.

  129. John says:

    Vicente, I am not sure that I have understood you. Are you a Direct/naive Realist?

  130. Vicente says:

    John: not at all. I don’t really buy any of the solutions that can be found in the literature. If I were forced to make a choice, I would say that I am close to “Interactionist Dualism”, acknowledging that I haven’t got a clue what is on the other side, with all the problems related to an unknown agent interacting with the brain, logical problems like infinite regressions, etc., etc…

  131. Jordan says:

    Not sure if this has been mentioned in the 130 other comments, but your definition of blindsight isn’t entirely accurate. Blindsight patients can correctly point to the stimulus in their blind field, but that isn’t by itself constitutive of the phenomenon: they can also localize it verbally, as well as discriminate it by shape and by type of movement.
