Blind Brain

Besides being the author of thoughtful comments here – and sophisticated novels, including the great fantasy series The Second Apocalypse – Scott Bakker has developed a theory which may dispel important parts of the mystery surrounding consciousness.

This is the Blind Brain Theory (BBT). Very briefly, the theory rests on the observation that, from the torrent of information processed by the brain, only a meagre trickle makes it through to consciousness; crucially, that includes information about the processing itself. We have virtually no idea of the massive and complex processes churning away in all the unconscious functions that really make things work, and the result is that consciousness is not at all what it seems to be. In fact we must draw the interesting distinction between what consciousness is and what it seems to be.

There are of course some problems about measuring the information content of consciousness, and I think it remains quite open whether in the final analysis information is what it’s all about. There’s no doubt the mind imports information, transforms it, and emits it; but whether information processing is of the essence so far as consciousness is concerned is still not completely clear. Computers input and output electricity, after all, but if you tried to work out their essential nature by concentrating on the electrical angle you would be in trouble. But let’s put that aside.

You might also at first blush want to argue that consciousness must be what it seems to be, or at any rate that the contents of consciousness must be what they seem to be: but that is really another argument. Whether or not certain kinds of conscious experience are inherently infallible (if it feels like a pain it is a pain), it’s certainly true that consciousness may appear more comprehensive and truthful than it is.

There are in fact reasons to suspect that this is actually the case, and Scott mentions three in particular: the contingent and relatively short evolutionary history of consciousness, the complexity of the operations involved, and the fact that it is so closely bound to unconscious functions. None of these proves that consciousness must be systematically unreliable, of course. We might be inclined to point out that if consciousness has got us this far it can’t be as wrong as all that. A general has only certain information about his army – he does not know the sizes of the boots worn by each of his cuirassiers, for example – but that’s no disadvantage: by limiting his information to a good enough set of strategic data he is enabled to do a good job, and perhaps that’s what consciousness is like.

But we also need to take account of the recursively self-referential nature of consciousness. Scott takes the view (others have taken a similar line) that consciousness is the product of a special kind of recursion which allows the brain to take into account its own operations and contents as well as the external world. Instead of simply providing an output action for a given stimulus, it can throw its own responses into the mix and generate output actions which are more complex, more detached, and, in terms of survival, more effective. Ultimately only recursively integrated information reaches consciousness.
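
To make the distinction concrete, here is a deliberately crude sketch (a toy of my own devising, not Bakker’s actual model; all the names are invented for illustration): a reflexive system maps stimulus straight to response, while the ‘recursive’ one also feeds a truncated trace of its own previous responses back into its input, so that only the integrated mixture ever surfaces.

```python
# A toy sketch, not Bakker's model: a reflexive system maps
# stimulus -> response, while a "recursive" system also feeds a
# truncated trace of its own prior responses back into its input.

def reflexive(stimulus):
    # Output depends on the external world alone.
    return f"react({stimulus})"

class Recursive:
    def __init__(self, horizon=2):
        self.trace = []          # record of the system's own operations
        self.horizon = horizon   # only a trickle of self-data survives

    def respond(self, stimulus):
        # Throw the system's own recent responses into the mix...
        context = self.trace[-self.horizon:]
        response = f"react({stimulus} | self:{context})"
        # ...but the trace is truncated at the horizon, and nothing in
        # the output marks where the cut-off falls.
        self.trace.append(response)
        return response

r = Recursive()
for s in ["light", "sound", "light"]:
    print(r.respond(s))
```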

The limits to that information are expressed as information horizons or strangely invisible boundaries; like the edge of the visual field, the contents of conscious awareness have asymptotic limits – borders with only one side. The information always appears to be complete even though it may be radically impoverished in fact. This has various consequences, one of which is that, because we can’t see the gaps, the various sensory domains appear spuriously united.

This is interesting, but I have some worries about it. The edge of the visual field is certainly phenomenologically interesting, but introspectively I don’t think the same kind of limit comes up with other senses. Vision is a special case: it has an orderly array of positions built in, so at some point the field has to stop arbitrarily; with sound the fading of farther sounds corresponds to distance in a way which seems merely natural; with smell position hardly comes into it; and with touch the built-in physical limits mean the issue of an information horizon doesn’t seem to arise. For consciousness itself, spatial position seems to me at least to be irrelevant or inapplicable, so that the idea of a boundary doesn’t make sense. It’s not that I can’t see the boundary or that my consciousness seems illimitable, more that the concept is radically inapplicable, perhaps even metaphorically. Scott would probably say that’s exactly how it is bound to seem…

There are several consequences of our being marooned on an encapsulated informatic island whose impoverishment is invisible to us: I mentioned unity, and the powerful senses of a ‘now’ and of personal identity are other examples which Scott covers in more detail. It’s clear that a sense of agency and will could also be derived on this basis, and the proposition that it is our built-in limitations that give rise to these powerfully persuasive but fundamentally illusory impressions makes a good deal of sense.

More worryingly, Scott proceeds to suggest that logic and even intentionality – aboutness – are affected by the same kind of magic, which likewise turns out to be mere conjuring. Again, results generated by systems we have no direct access to are complacently but quite wrongly attributed by consciousness to itself, which is thereby deluded as to their reliability. It’s not exactly that they don’t work (we could again make the argument that we don’t seem to be dead yet, so something must be working), more that our understanding of how or why they work is systematically flawed, and in fact, as we conceive of them, they are properly just illusions.

Most of us will, I think, want to stop the bus and get off at this point. What about logic, to begin with? Well, there’s logic and logic. There is indeed the unconscious kind we use to solve certain problems, which certainly is flawed and fallible; we know many examples where ordinary reasoning typically goes wrong in peculiar ways. But then there’s formal explicit logic, which we learn laboriously, which we use to validate or invalidate the other kind, and which surely happens in consciousness (if it doesn’t, then really I don’t think anything does, and the whole matter descends into complete obscurity); it’s hard not to feel that we can see and understand how that works too clearly for it to be a misty illusion of competence.

What about intentionality? Well, for one thing, to dispel intentionality is to cut off the branch on which you’re sitting: if there’s no intentionality then nothing is about anything and your theory has no meaning. There are some limits to how radically sceptical we can be. Less fundamentally, intentionality doesn’t seem to me to fit the pattern either; it’s true that in everyday use we take it for granted, but once we do start to examine it the mystery is all too apparent. According to the theory it should look as if it made sense; but on the contrary, the fact that it is mysterious and we have no idea how it works is all too clear once we actually consider it. It’s as though the BBT is answering the wrong question here; it wants to explain why intentionality looks natural while actually being esoteric; what we really want to know is how the hell that esoteric stuff can possibly work.

There’s some subtle and surprising argumentation going on here and throughout, which I cannot do proper justice to in a brief sketch, and I must admit there are parts of the case I may not yet have grasped correctly – no doubt through density (mine, not the exposition’s) but also, I think, perhaps because some of the later conclusions here are so severely uncongenial. Even if meaning isn’t what I take it to be, I think my faulty version is going to have to do until something better comes along.

(BTW, the picture is supposed to be Thomas Aquinas, who introduced the concept of intentionality. The glasses are supposed to imply he’s blind, but somehow he’s just come out looking like a sort of cool monk dude. Sorry about that.)


41 thoughts on “Blind Brain”

  1. What about intentionality?

    Well, if you shy away from thinking about intentionality it could be because you feel you are approaching a horizon in your consciousness…

  2. Peter: “It’s as though the BBT is answering the wrong question here; it wants to explain why intentionality looks natural while actually being esoteric; what we really want to know is how the hell that esoteric stuff can possibly work.”

    I’ve always been a bit puzzled about what is meant when I read philosophers’ discussions of intentionality. As I see it, whenever we think about anything, we are thinking *about* something in the world; this includes what we take to be real objects and events as well as any and all fictional objects and events. Is this all that philosophers mean when they use the word “intentionality”? Peter? Scott? Anybody? Please help me on this.

    If this is what is meant by *intentionality*, then I think Peter’s question about “how the hell that esoteric stuff can possibly work” gets a reasonable answer in the neuronal structure and dynamics of our putative retinoid system and its synaptic connections to our preconscious cognitive mechanisms. See *The Cognitive Brain* (1991).

  3. intentionality
    noun
    the fact of being deliberate or purposive.

    Our brains evolved to make predictions for the purpose of solving predictive problems. Philosophers and scientists (Dennett, Dawkins) who don’t like to ascribe purposes to evolution seem to have no problem with intentionality, which is rather a neat way of having purposes arise and prosper from our intentions, rather than the other and more accidental way around.

  4. Arnold, good point, I’ve felt similarly puzzled for a long time now.

    What I believe is going on is that it is becoming clearer that our “psychological understanding” of reality (the world’s behaviour) is based on metaphorical models (see Dehaene, Lakoff et al.) that we have intuitively integrated into our minds through naive perception. We then need to mirror the system under consideration in terms of those well-known everyday phenomena. If this can’t be done, whether it is modern physics or consciousness, the thing becomes esoteric.

    For this reason I am more convinced every day that an analytical and epistemological approach to the hard problem of consciousness is quite hopeless, and only an ontological frame based on a meditative strategy could yield some truly satisfactory results, if any.

    You probably won’t agree, but it all depends on the tolerance for frustration each of us has.

  5. Our brains deal in intentionality so much they may have the kind of difficulty recognising it that fish (possibly) have with water.

    There’s a short account I wrote several years ago here, but I doubt whether that will help catch the mystery. The following passage, quoted from here, about the strange ability of intentionality to make future or imaginary things have causal effects on us, might perhaps be helpful.

    One of the more mysterious aspects of intentionality is the ability of intentions (along with beliefs, desires, and so on) to address the future. The footballer’s kick, for example, occurs because of a possible goal which hasn’t happened yet (and indeed, may never happen). If intentions really allow future possibilities to influence present events in this way, they clearly do break the normal causal series: it isn’t that people are somehow uncaused causes, but rather that the causes of intentional behaviour lie in the future rather than the past. It’s the intentionality of intentions, and specifically their future-directedness, which runs counter to normal causality.

    The dilemma created by this answer, however, seems as bad as the original problem. On the one hand, it seems clear that we do routinely think about things that haven’t happened yet, and base our actions on those thoughts. It surely can’t be true, however, that the future causes our thoughts in any direct physical sense. For one thing, if that were so, our beliefs about the future would be as definite and particular as our memories of the past: for another, we can think about things that don’t exist and never could just as easily as we can think about the genuine future. In fact, it seems clear that there is no fundamental difference between these two cases; when we think about the future it is an imagined future we address.

    A natural response to the impasse is therefore to say that our intentional acts are caused, not by the future itself, but by mental images or representations of it. But this merely pushes the problem back a step. Aren’t images caused by the thing they represent? It is the shape and colour of the object which determine the shape and colour of its picture. If our mental representations are caused by possible future events, we are back with the original problem of how something in the future can affect events now. If, on the other hand, the images are not caused by the events they represent, how do they come to have anything to do with them, and in what sense can they be representations?

    But can I put in a plea that we talk about the BBT specifically here? If you want to go back to basics on intentionality I’ll happily do a post on that next week(ish).

  6. The Stanford Encyclopedia of Philosophy defines intentionality this way:

    *Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs.*

    Scott claims (if I properly understand him) that his BBT implies that intentionality and representation are mere illusions. While I certainly agree that we are each blind to the workings of our own brain, I think it is a mistake to think of intentionality – our transparent representation of objects and events in the world around us – as an illusion. While it is true that the brain mechanisms that give us our phenomenal world/intentionality can and do generate illusions (e.g., the moon illusion), the major content of our subjective representation of the world is matched closely enough to the real world to enable our survival and the achievements of human society.

    Peter suggests that our brains deal in intentionality so much they may have the kind of difficulty recognising it that fish might have with water. I agree. Our phenomenal world is such an omnipresent and intimate presence that we fail to see it as the fundamental referent of our concept of consciousness.

  7. “Peter suggests that our brains deal in intentionality so much they may have the kind of difficulty recognising it that fish might have with water.”
    Must be that fish don’t recognize that water serves their purposes.

  8. It seems to me that Bakker is saying that there is some sort of informational limit on experience imposed by virtue of the fact that the field of retinal vision is limited by the radial extent of the retina itself. And if we could move beyond this limit then we could have access to the complete panoply of the underlying subconscious functionality that selects for conscious experience. The problem with this is that we do in fact have phenomenal access to whatever visual experience is available, albeit not simultaneously. What’s missing here is not content but motive: what are the motivational concerns that bring this content to the fore in the first place? How would a complete representation of all possible experience somehow also then reveal the actual underlying motivational considerations that drive this experience?

    I would propose that the notion of intentionality is an abstract or indirect expression of these underlying concerns which are, quite simply, profoundly inaccessible via experience no matter how broad its purview.

  9. Let’s assume that the Recursive Structures and their horizons and limited data are all that is important in the generation of our highly prized sense of self, intentionality and being present. I do not know what percentage of the brain this will require but I guess it will be a very small (if busy) part.

    What is the remainder of the brain doing? I suggest that it is processing a lot of sense data, disregarding unimportant data, triggering automatic unconscious responses to much of the remainder, and only passing on what needs to be handled with the special ‘tricks’ of consciousness.

    So much of what we intend and do is run automatically. You don’t usually think about walking, or navigating to the cafe/canteen/kitchen/locker room to eat your sandwiches. But you might need some ‘extra’ localised conscious thought about whether or not you should eat cheese and onion sandwiches before you meet your boss (or loved one etc.)

    From this standpoint the ‘functions’ of the RS naturally arise from its structure – but this might help discriminate between what consciousness is, what it is ‘for’, and what it feels like.

  10. One of Scott’s suggestions is: “And we would finally see that what we presently call ‘consciousness’ was in some sense a profoundly deceptive cartoon.”

    It struck me that a radically ‘simplified’ cartoon of reality might be more immediately beneficial than a full-featured, 3D, all-alternatives-considered, detailed picture. Better to be quick, right enough, and alive, than deliberate, totally correct, and dead.

  11. “by limiting his information to a good enough set of strategic data”
    This is part of the heuristic nature here – the ‘good’. Why is it good enough? Because…just think that and whatever it is that makes us default to thinking it’s just good enough – well, where did that come from? A blind side?

    Surely it must seem somewhat disturbing to have ‘good enoughs’ coming out of nowhere, when, if you ponder it, ordering the wrong sizes of boots has lost battles? Where else has a ‘good enough’ come out of nowhere and given its blessing to an act which, if pondered, just wasn’t good enough?

    “It’s not that I can’t see the boundary or that my consciousness seems illimitable, more that the concept is radically inapplicable, perhaps even metaphorically.”
    How can something be both not illimitable, yet not have a boundary?

  12. We consciously examine everything we can sense in our environment, but obviously within the limits of our individual sensory capacities. We are looking both for what we expect to find, and for what we are finding that was unexpected. We automatically distrust the unexpected and, based on our instincts and past experience of observing patterns, attempt to discern the purposes for anything and everything being where it should or shouldn’t be.
    We might have a multitude of emotional reactions based on this continuous conscious and “subconscious” analysis, and we are consciously examining and responding to these earlier reactions as well. We are conscious on different levels, while one level may not be conscious of another’s operations, even while they continually make each other conscious of their analytical results.
    In other words we are conscious all the time on different levels, while we tend to give only the executive level credit for consciousness at all.
    You may say of course that I am misconstruing the nature and effects of consciousness, but any of the multitude of our choice-making functions that are dealing with our sensory apparatus are necessarily conscious of the operations.

  13. Michael,

    “It seems to me that Bakker is saying that there is some sort of informational limit on experience imposed by virtue of the fact that the field of retinal vision is limited by the radial extent of the retina itself. And if we could move beyond this limit then we could have access to the complete panoply of the underlying subconscious functionality that selects for conscious experience.”

    Well, that is not fully consistent…

    Although there is a border to the visual field imposed by the eye’s geometrical and anatomical boundaries, there is a much more important constraint where consciousness is concerned, which is the fact that from the whole visual field we can only focus our attention on a very small region, and only really be conscious of that (intentionality).

    What’s more, paying attention to that spot, and being aware of it, completely changes the conscious experience, leading to a more mindful and quite different one.

    So the perceptual phenomenal theatre is part of a much broader experience, and I wouldn’t say the most important part.

    I have the impression that Bakker is confusing the limits of the senses with the limits of conscious experience.

    Probably the idea of informational horizons could be replaced by the more general and encompassing notion of “global states of mind”, allowing the theory to cover a much greater area. I confess that I have gone through the paper in a quite fast and shallow fashion, and I should reflect a bit more upon it.

  14. Vicente, I think Michael may have misphrased it as due to the radial extent of the retina. The idea is the radial extent of the mind’s capacity to read data from the eye. This extent fades out and you don’t have any sense of where it fades out – no one sees with a little border around the edge, showing where their capacity to see ran out. There is no sense for a lack of sense. Nature is too stingy.

    The idea is that being unable to see borders can easily mean there are a lot of them one is unaware of. You go from the mind, to the mind with thousands of borders wedged into it from all angles and in all sorts of spots. All invisible.

    Roy, it’s also possible that rather than an executive level, consciousness is skewed across multiple levels – some of them taking up more, less or even none of it at times. Not one singular command center. More like an internet, perhaps?

  15. My quick version of the problem of intentionality, and then I will tie it in to Scott’s vision.

    I like the frog example. We have a talking, thinking, semi-intelligent frog. She is in the swamp eating flies on a daily basis. A scientist comes along and asks, “When you see flies, are you then thinking about flies?” Yes, responds the frog. The scientist takes the frog to a lab and devises experiments which show, since the frog stops catching the fly with her tongue, that what the frog is seeing is only “fast-moving specks” and she is then inferring that they are flies. Given the makeup of swamps, “fast-moving specks” always happen to be flies, and therefore the frog has never had a problem catching flies in the past.

    Then, the scientist gives the frog some contacts and a hearing aid and the frog “sees” and “hears” flies more like humans do. “Now,” the frog says, “I only -see- flies and therefore my thoughts when I see them are only -about- flies.”

    The problem is that a brain/mind representation is only ever a representation. It is a culling and concentrating of sensory information to form an “image” of what is supposedly out there. However, the representation is never the object. The problem, I would argue, similarly to what I think Scott is highlighting, is that some kind of phenomenological trick, such as the frog always being right (her seeing of “fast-moving specks” always aligns with the desired result of tasting flies), means that we mistake our inner mental image for a faithful reproduction of the object in itself. Our “aboutness” takes on more than just some kind of relation between self and world occurring, and instead we make the frog’s mistake of thinking that our perceptions and representations are sanguine and co-extensive in some strong, infallible way.

    We form thoughts “about tables” because tables are useful to us, whereas a talking, thinking electron will form thoughts “about an area of denser atoms stretching for as far as her eye can see.” That’s not to be anti-realist about “tables,” but there is an interesting play between what comes to our consciousness, what our brain is doing, and then even why our brain/mind/consciousness is choosing to categorize the world in the way that it does. In the meanwhile, it is only the “consciousness” that we see, and we cannot ‘image’ine the world as seen by the electron; we think we get something right “about” tables in our representation of tables. And we do, I would accept: being the bodies that we are, our hands do not and cannot pass through the table (unlike perhaps our electron can), and it is therefore useful that we “see” tables as being “solid”. But again, the pragmatic benefit of our representation of the table, the relation between our image and the world, does not mean our representation is “about” tables in the way it is presented to our conscious self and thus in the way our consciousness is thinking about the world, that is when we are not in philosophical discourse or when doing atomic physics.

  16. Lyndon: “But again, the pragmatic benefit of our representation of the table, the relation between our image and the world, does not mean our representation is “about” tables in the way it is presented to our conscious self and thus in the way our consciousness is thinking about the world, that is when we are not in philosophical discourse or when doing atomic physics.”

    1. Our consciousness does not *think* about the world. Our consciousness *is* our personal phenomenal world.

    2. All of our thinking *about* the world that we live in is done pre-consciously and the results of this pre-conscious cognition are projected as inner-speech and sensory images (~ 500 ms delay) to become part of our occurrent conscious experience.

    3. The pragmatic benefit of our conscious representation of a table is multifold; e.g., we can put things on it; we can sense its dimensions and thus avoid bumping into it; we can move it around to fit a convenient space, etc. This kind of utility holds for most other aspects of conscious representation, which leads to the common default assumption that we have a direct acquaintance with the real world. As Scott puts it, we are blind to the fact that what we think of as the real world is really a construction of our brain. The scientific problem is to figure out how the brain does this trick. My proposed solution is the retinoid system working in recurrent activation with pre-conscious cognitive mechanisms such as synaptic matrices in the sensory modalities. See in *The Cognitive Brain*, “The Pragmatics of Cognition”, pp. 300-301, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter16.pdf

  17. Callan asks, “Not one singular command center. More like an internet, perhaps?”
    Yes and no, because the internet was not constructed to serve the same purposes that we serve. We, in effect, have continuously constructed ourselves according to whatever new purposes we’ve discovered that we’ve then chosen, for whatever reasons, we can (if not should) better serve.
    We do have more than one command center, but the center that chooses when to take a step, may then sit back and let another center choose how to move our foot in anticipation of what it will land on. Executive delegations of authority perhaps. There’s an argument going on in some circles as to where the final decisions to take actions are made. Some that I tend to agree with say that center is in our emotional system with the more modern rational system having veto power. Others say that the rational system has delegated its “executive” authority to our other systems. But in a way, we’re seeing that these systems are competing for authority but in the end cooperating to act on it.
    This is an extremely complicated process that we continuously engage in. Like little groups fighting internally for hierarchical status so that its denizens can cooperate to resourcefully compete with other groups in the bigger external picture.

  18. No-one so far has mentioned blindsight, where the brain has found ways to imagine with more accuracy than not what’s out there that it realizes, or comes to realize, it hasn’t been able to sense visually. Our senses can of course be fooled, accidentally or deliberately, but we have evolved in various ways to counter these attempts and have continued to more successfully survive accordingly.

  19. Arnold,

    On 1, I agree I stated that poorly. However, there is a conceptual problem of what “consciousness is” that Peter highlighted in the original post. When we “consciously” add up a sum, or consciously acknowledge the processes that we had already devised, I am not sure what we are going to mean by “our personal phenomenal world.” It seems to me that there are contents of our consciousness that are “representing” logical structures, that are “thinking” so to speak. As for the lag question, I do not see why we could not consider both the unconscious and the conscious as representational, as “thinking about” the world, unless we are absolutely epiphenomenal. And I am using representation very loosely here, everywhere.

    On 3, don’t we have to accept that it is not “conscious” representation that allows us to use the solidity of the table to our benefit, but only the representation of such solidity, which as you just stated should occur prior to our conscious awareness of it? The question becomes: What does consciousness add to our representations? Nicholas Humphrey gives one positive take in Soul Dust, mirroring takes by others, which I did not find entirely satisfactory. But we could also just chalk it up to its having been easier or evolutionarily necessary to form complex representational creatures that happened also to be conscious, than ones which were representational but not conscious. I have actually come to accept that consciousness is not a property or quality that is separate from an immense, self-focused, self-representational system, so I would probably just toss the whole question out.

  20. Hello Roy,

    I suppose I’d ask: what if the purpose that you mentioned shifts due to those little groups fighting, but the shift in purpose is not acknowledged, not made explicit? As if the same purpose were being followed as before? What does this make of the centralised idea?

  21. Callan, groups are cultural entities, and formed for cooperative purposes, the main purpose being to survive in the competitive search, at least as it’s been necessary on earth, for earth’s resources. As I’ve written elsewhere, “when it comes to competing for resources, that competition is the impetus that makes cooperation happen at all. We can, in other words, cooperate to compete for resources. We cannot compete to cooperate for resources. There’s no strategic purpose there at all.”

    And so in my view all our strategic purposes are both individually and culturally acquired, and evolve according to our naturally and deliberately evolving circumstances – and in the end they necessarily evolve the creatures that, through experiencing their different circumstances, have acquired and continually evolved their varied strategies.

  22. Lyndon: “I do not see why we could not consider both the unconscious and the conscious as representational, as “thinking about” the world, unless we are absolutely epiphenomenal. And I am using representation very loosely here, everywhere.”

    The point is that there is a crucial difference between the brain’s conscious representations and all of the vast number of its parallel unconscious representations. Our conscious representations are a coherent global presentation of our phenomenal world (retinoid space), where all objects and events are spatiotemporally bound in perspective within an egocentric plenum. This is the world that our unconscious cognitive mechanisms *think about*. The phenomenal world can *represent* the thoughts of the preconscious mechanisms, but it does not do the thinking (analytic work) about itself. This is an important distinction.

    Lyndon: “The question becomes: What does consciousness add to our representations? ”

    The short answer is *subjectivity*. There is no subjectivity without an egocentric perspective. Each representation in our preconscious cognitive mechanisms stands alone and therefore is not a conscious representation. It is only *after* such representations are projected and bound in egocentric space that they become part of our subjective/conscious experience. Think of it this way: Without consciousness we would have no world to talk about, or to explore, or to understand. Most creatures in this world exist in this cognitively depleted state.

  23. “Without consciousness we would have no world to talk about, or to explore, or to understand. Most creatures in this world exist in this cognitively depleted state.”
    Actually we live in a cognitively enhanced state that we’ve evolved from the “most creatures” that were formerly us – which we would not have done if we hadn’t culturally communicated in our primitive attempts to explore and understand the world.
    We have always been aware, and have striven to be as conscious of that awareness as we can.

  24. Roy, I think I understand that groups are cultural entities and I’m fine with that. But are you saying you are a cultural entity, i.e. a group in your skull? If not, as I said, I’m proposing you are. Proposing that consciousness is sleighted across a group.

    I’m not trying to argue about regular groups and their cooperative habits.

  25. To me the heavy lifting of the BBT is done by its premise: that a conscious first-person perspective can be achieved in a recursive system. Though I love cool new algorithms, I can’t see anything like subjectivity coming from recursion or complexity or systems ‘watching’ other systems. What I see in these ideas is an objective set of instructions that execute upon the data they receive. As a scientist it seems wise for me not to ‘breathe life’, so to speak, into these algorithms – that is, to be careful not to mix up the intentionality that is a product of my mind with the components, inputs and outputs of the algorithm. The ‘instructions’ I see, sitting in front of my computer, are just an intentional label my mind uses to categorize a particular arrangement of charge distribution in my computer’s memory. This charge distribution will generate a sequence of electron currents that will transmit this comment when I hit the submit button – a result that my mind feels is meaningful.

    Presuming that consciousness is reducible to an algorithm, what would cause us to insist that it was not? This is an interesting question, but what would really turn me is proof of the supposition.

  26. Callan, re where cultural groups reside, etc.:
    Well, all groups’ cultures reside in the memories of their living members, but also in their books, art, music, architecture, etc. So we are individually conscious of what we have retrieved from memory, or from these other sources. Cultures result from intercommunication within a group or family. (I don’t think communication between our own emotional and rational brains, for example, counts as cultural, however; but on the other hand if it helps us evolve, perhaps it should.)
    And we are conscious of communicating to others the learned experiences that will, hopefully or not, be added to our cultures. But that consciousness is conscious OF this information, not a consciousness WITHIN the information itself. Information has no meaning outside of the entities that it’s meant or destined to be deciphered by.
    Not that these cultural entities must be unaware of their conditional existence; but even so, they won’t otherwise be aware of themselves as culturally meaningful artifacts.

  27. I also meant to add that, at least after my first read, I find the BBT paper to be an inversion of Descartes’ Evil Demon. In this case the brain is the ‘evil demon’ who restricts information to the ‘mind’. The argument starts with this manipulation and works back (in the sense of Descartes’ progression) to argue the opposite conclusion: that intentional thought is an illusion. Perhaps this comparison is simplistic and misleading, but this odd thought keeps hanging around me.

  28. The odd thought that hangs around me is that when most of us see a table top, we know that we can’t see inside the surface, or the underside from there, etc., etc. But then again, depending on our purposes, we know we can find the other sides of the table if we need to, and will have little difficulty in constructing a new one when purpose serves.
    And if need be, we could manage to evolve extendable eye sockets, as some other creatures do, to look around and under the table in an environment where more nasty creatures regularly hid there.
    But see, we don’t have to – we’ve used our otherwise blind brains to invent tools for that. Perhaps, then, we’re as conscious of what we can’t see as of what we can.

  29. Roy,

    But that consciousness is conscious OF this information, not a consciousness WITHIN the information itself. Information has no meaning outside of the entities that it’s meant or destined to be deciphered by.

    Suppose you had a device which wrote information – it could also read what it wrote, but only up to a certain limit: it couldn’t read the very act of itself writing. So this was beyond the device.

    And the device wrote in its information ‘I am conscious’.

    The device just couldn’t catch up with its own process – and so this blind spot was given a special name in the information it wrote.

    Do you think the device should be able to read itself as it reads – or read itself as it writes? Or that there is room to assume a blind spot in there?
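
    A minimal sketch of such a device (purely illustrative, assuming nothing about any real system; the names are invented):

    ```python
    # Illustrative toy: the device can read everything on its tape,
    # but the act of writing never appears on the tape at the moment
    # it happens - recording the act would itself be another (again
    # unrecorded) write, and so on in a regress.

    class Device:
        def __init__(self):
            self.tape = []

        def write(self, entry):
            # Only the product of writing lands on the tape; the
            # writing act leaves no trace of itself.
            self.tape.append(entry)

        def read(self):
            # The device sees finished entries, never the write in
            # progress - a structural blind spot, not a missing datum.
            return list(self.tape)

    d = Device()
    d.write("I am conscious")
    d.write(f"I have read: {d.read()}")  # reads its past, never its present act
    print(d.read())
    ```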

  30. Scott’s having technical problems, but he’s kindly provided the following comment…

    Cool beans! Thank you, Peter. You have indeed grasped the nub of the position, as well as the main intuitive reasons why people should be skeptical. I personally find it difficult to believe more than a fraction of the thing at any given point in time. I’ve come to realize that the primary difficulty people have grasping the gestalt of the position has to do with my background in continental philosophy and its long tradition of thinking through problems in terms of ‘neglect.’ If you consider the way hemineglect or anosognosia beggar common sense, and the fact that BBT argues that the best way to explain the consciousness we think we have is to assume that we are all victims of a kind of natural anosognosia, you can get a sense of the counterintuitive demands it makes.

    If the subject matter were anything but *consciousness,* I would be inclined to reject the approach entirely. Add to that this very simple question: If the primary symptom of patients suffering neuropathological neglect is the inability to cognize their cognitive deficits, then how do we know that we don’t suffer from any number of ‘natural’ forms of neglect? The obvious answer is, We don’t. BBT simply asks: Could these be the reason why we find consciousness so stupendously difficult to understand?

    Regarding intentionality the problems are numerous and far-ranging. Intentional concepts violate compositionality (the problem of propositional attitudes). Intentional concepts resist analysis (they, as Wittgenstein pointed out, seem to have a ‘fixed’ locus in language). Intentional concepts seem to be incompatible with functional concepts (which is why neuroscience is generating a conceptual crisis in various legal and ethical domains). And this seems to be related to the most damning problem of all from a scientific perspective: the way they resist naturalization. Whole philosophical movements and schools, such as representationalism, computationalism, biosemantics, are founded on the explicit premise that intentionality can be naturalized. Implicitly, pretty much every position under the sun leans on intentional assumptions–that, once again, no one can explain. These are ongoing projects involving some of the most brilliant minds in the biz, and yet despite their best efforts, all their attempts to outline ways that intentionality could be naturalized in principle contain holes big enough for buses to drive through.

    But, as Peter points out, it seems to be the water, and we the perplexed fishies. For those who are interested, my latest post at TPB happens to be on Dennett and the intentionality debate: http://rsbakker.wordpress.com/2012/11/10/meathooks-dennett-and-the-death-of-meaning/

    Otherwise, I heartily encourage anyone with questions and counterarguments to sound off here.

    Thank you again, Peter. You remain the most open mind/brain in the consciousness blogosphere. All the rest of us have far too many theoretical axes to grind!

  31. Scott: “Implicitly, pretty much every position under the sun leans on intentional assumptions–that, once again, no one can explain.”

    Scott, I claim that the retinoid model of consciousness explains *intentionality* as defined in the Stanford Encyclopedia of Philosophy. What are your counter-arguments?

  32. “the best way to explain the consciousness we think we have is to assume that we are all victims of a kind of natural anosognosia,”
    Well, we’re all ignorant of what we’re ignorant of, and that includes ignorance of the fact that these paradoxical and perplexing word games are in the end mere – and often silly – games.
    And if we are in fact purposively constructed creatures, intentionality reflects the need to serve the variety of purposes we were in particular constructed (as in evolutionarily self-constructed) to expect, anticipate and choose to give service to. Neo-Darwinians such as Dawkins and Dennett don’t and won’t see the role that purpose plays at all in our evolution, hence the rationalizations that run rampant in these blogs, fueled by pre-Darwinian savants as much as their post-Darwinian gurus.

  33. Callan, a device that wrote information would not need to understand what it wrote if the understanding was in the intelligent entity that operated the device. If the device operated itself and wrote meaningful information, then it was a meaningfully intelligent device to begin with. The only meaningfully intelligent entities that we know of that can do such things as alphabetic symbolic writing are humans, but our animalistic ancestors could meaningfully pass on information in other ways. Ancient single-celled bacteria “wrote” meaningful behavioral instructions in their memory systems, except that we can only speculate as to how this worked. We are fairly sure however that these “writings” were selectively retained to pass on to their progeny by duplication of the systems that retained them.
    “And the device wrote in its information ‘I am conscious’.”
    In which case it was either alive or lying in a Wittgensteinian sort of way.

  34. Everything in the universe is arguably aware, and consciousness is our term for the degree of awareness that some of these things – having in and of themselves evolved awareness – have evolved to become aware of. Awareness is essential to intelligence and vice versa, arguably being parts of the same strategic thing, and the universe is more arguably a strategically intelligent collection of cooperative and competing systems. Some have no choice except to evolve to, in the end, make choices, and to evolve the intelligence that choice-making has then required. Evolution is thus purposive in its continual acquisition of specific purposes.
    Intentionality, as the dictionary in its wisdom says,
    noun
    the fact of being deliberate or purposive.

  35. but our animalistic ancestors could meaningfully pass on information in other ways

    In some ways that makes me think of a trapeze act – where one side lets go of their swing, but then the other side catches them. The whole thing could fall into the net, but doesn’t. But does that make meaning present, or simply mean the idea of meaning is supported? Sort of a mime, where, as long as the acrobats catch each other rather than hit the net (or the floor), the mime is effective?

    In which case it was either alive or lying in a Wittgensteinian sort of way.

    What would be lying?

  36. If you’re asking about the nature of meaning, that’s a question too big to be adequately addressed here. We can infer meaning from signals that were not intended to confer it, for example. Or we can talk until we’re blue in the face to those who think literally but can draw virtually no abstract inferences.
    What would be lying? Again, too big a subject to address here. I’d ask you to read my book on that, but I don’t like it all that much when people try to sell their books on other people’s blogs.

  37. I wonder if consciousness studies are stymied by the need in theory-building to start from undefined or “primitive” terms – and whether the problem of consciousness, a study of subjective experience, ultimately and necessarily reduces to an attempt to define its own primitive terms. I will hazard a guess that the “how” questions of information processing can be answered, perhaps by neuroscience, but that the “why” questions are intractable. If so, we will never know why information processing in brains is attended by a subjective sense of “aliveness,” or even whether this subjective sense is dependent on nervous systems or on any particular complexity of organization. After all, we just take for granted that things like rocks are not conscious because they emit no behavioral signals we would recognize as indicative of intellect.
