Consciousness speed-dating

Why can’t we solve the problem of consciousness? That is the question asked by a recent Guardian piece.  The account given there is not bad at all; excellent by journalistic standards, although I think it probably overstates the significance of Francis Crick’s intervention.  His book was well worth reading, but in spite of the title his hypothesis had ceased to be astonishing quite a while before. It is surely also a little odd to have Colin McGinn named only as Ted Honderich’s adversary when his own Mysterian views are so much more widely cited. Still, the piece makes a good point: lots of Davids and not a few Samsons have gone up against this particular Goliath, yet the giant is still on his feet.

Well, if several decades of great minds can’t do the job, why not throw a few dozen more at it? The Edge, in its annual question this year, asks its strike force of intellectuals to tackle the question: What do you think about machines that think? This evoked no fewer than 186 responses. Some of the respondents are old hands at the consciousness game, notably Dan Dennett; we must also tip our hat to our friend Arnold Trehub, who briefly denounces the idea that artefactual machines can think. It’s certainly true, in my own opinion, that we are nowhere near thinking machines, and in fact it’s not clear that we are getting materially closer: what we have got is splendid machines that clearly don’t think at all but are increasingly good at doing tasks we previously believed needed thought. You could argue that eliminating the need for thought was Babbage’s project right from the beginning, and we know that Turing discarded the question ‘Can machines think?’ as not worthy of an answer.

186 answers is, of course, at least 185 more than we really wanted, and those are not good odds of getting even a congenial analysis. In fact, the rapid succession of views, some well-informed, others perhaps shooting from the hip to a degree, is rather exhausting: the effect is like a dreadfully prolonged session of speed dating. Like my theory? No? Well, don’t worry, there are 180 more on the way immediately. It is sort of fun to surf the wave of punditry, but I’d be surprised to hear that many people were still with the programme when it got to view number 186 (which, despairingly or perhaps refreshingly, is a picture).

Honestly, though, why can’t we solve the problem of consciousness? Could it be that there is something fundamentally wrong? Colin McGinn, of course, argues that we can never understand consciousness because of cognitive closure; there’s no real mystery about it, but our mental toolset just doesn’t allow us to get to the answer.  McGinn makes a good case, but I think that human cognition is not formal enough to be affected by a closure of this kind; and if it were, I think we should most likely remain blissfully unaware of it: if we were unable to understand consciousness, we shouldn’t see any problem with it either.

Perhaps, though, the whole idea of consciousness as conceived in contemporary Western thought is just wrong? It does seem to be the case that non-European schools of philosophy construe the world in ways that mean a problem of consciousness never really arises. For that matter, the ancient Greeks and Romans did not really see the problem the way we do: although ancient philosophers discussed the soul and personal identity, they didn’t really worry about consciousness. Commonly people blame Western dualism for drawing too sharp a division between the world of the mind and the world of material objects: and the finger is usually pointed at Descartes in particular. Perhaps if we stopped thinking about a physical world and a non-physical mind the alleged problem would simply evaporate. If we thought of a world constituted by pure experience, not differentiated into two worlds, everything would seem perfectly natural?

Perhaps, but it’s not a trick I can pull off myself. I’m sure it’s true our thinking on this has changed over the years, and that the advent of computers, for example, meant that consciousness, and phenomenal consciousness in particular, became more salient than before. Consciousness provided the extra thing computers hadn’t got, answering our intuitive needs and itself being somewhat reshaped to fill the role.  William James, as we know, thought the idea was already on the way out in 1904: “A mere echo, the faint rumour left behind by the disappearing ‘soul’ upon the air of philosophy”; but over a hundred years later it still stands as one of the great enigmas.

Still, maybe if we send in another 200 intellectuals…?

43 thoughts on “Consciousness speed-dating”

  1. “Perhaps, though, the whole idea of consciousness as conceived in contemporary Western thought is just wrong? It does seem to be the case that non-European schools of philosophy construe the world in ways that mean a problem of consciousness never really arises. For that matter, the ancient Greeks and Romans did not really see the problem the way we do: although ancient philosophers discussed the soul and personal identity, they didn’t really worry about consciousness. Commonly people blame Western dualism for drawing too sharp a division between the world of the mind and the world of material objects: and the finger is usually pointed at Descartes in particular. Perhaps if we stopped thinking about a physical world and a non-physical mind the alleged problem would simply evaporate. If we thought of a world constituted by pure experience, not differentiated into two worlds, everything would seem perfectly natural?”

    I think this right here is exactly what the problem is and is a natural consequence of supposing the hard problem exists at all. If consciousness and matter exist as two separate ‘realities’ then there is always the question of why either exists in spite of the Universe being totally accountable without the other. The thing is, not separating them does nothing to damage the nature of science. It just opens the flood-gate of inquiry into what the nature of reality actually is.

  2. Is there any reason to take Dennett seriously? I figure he’s just a political pundit nowadays hawking his wares, trying to proselytize his brand of computationalism that, in addition to being silly, just avoids the implications Bakker and Rosenberg (and guys like Feser, who have argued as falsifying materialism) have raised about what it means to be eliminativist.

  3. Peter: “Honestly. though, why can’t we solve the problem of consciousness?”

    It seems to me that if we want to solve the problem of consciousness we have to ask “What are the important questions that a theory of consciousness should be able to answer?”
    For me, the most important question to be answered is “How is it that we experience a coherent world all around us when we have no sensory receptors that can detect the space we live in?”

  4. Whenever I see someone asking “Why can’t we solve the problem of consciousness?” I wonder “How do you know we haven’t?”.

    By which I mean, perhaps one of the existing views is correct, and the problem is those who don’t hold the view are just too blind to see it. The thing which distinguishes consciousness from many other such questions is that the objective/subjective distinction means there is no conceivable way to test which view is correct, and so no way to settle the argument.

    Imagine if we were not able to test relativity. Though it was conceived as a rigorous theory based on solid a priori assumptions, I doubt it would have achieved widespread acceptance without the corresponding empirical backup.

    So in my view, we don’t really need more intellectuals or more ideas. We probably already have the answer, we just can’t prove it.

  5. First, wow, you made it through all of that? I kind of forgot that I even tried. But now I’ve recommitted, so maybe I’ll make it through it by the end of the month. I argue along the lines of Mark O’Brien and also your note that the advent of digital computers was something of a milestone. The philosophers were never going to implement consciousness empirically. (With what?) So many of the necessary tools (neuroscience, computer science, information theory) are so recent to the scene. We barely knew what a neuron was a hundred years ago. Not to dis philosophers, but a closed-form solution of “what is consciousness?” was never much of a possibility and still isn’t. Tomorrow, is someone going to have a Eureka! moment? Unlikely. But we’re still doing pretty well. We toyed around with fantastical notions of flight for hundreds (thousands?) of years before the confluence of tools and ideas actually did it.

  6. pmath325 – I find it difficult to give up thinking in terms of the physical and the non-physical and take a unified view of the world instead: but then I find it difficult to give up thinking in terms of time and space and think about interval instead, as I believe relativity requires, so it could just be my lack of grasp. I need to meditate more, possibly.

    heemz – Don’t know Graham Harman, but clearly I should, so thanks.

    Sci – On Dennett, I think he’s still one of the panjandrums, though perhaps the conversation has, you know, moved on a bit.

    Hunt – I may have skimmed a bit… Yes, I think computers have changed both the question and the kind of answer we expect. We used to ask what the intellectual justification for, say, induction was; now we ask how to build a machine that does it. Sometimes we get better answers, sometimes just clearer problems.

  7. Mark, I think you raise an excellent point. I have argued that the retinoid model of consciousness does solve the problem of consciousness because it explains the fundamental question of how we are able to experience a coherent world all around us when we have no sensory receptors that can detect the space we live in. Notice that experiencing the world around us is *what it is like to be conscious*. I should add that the retinoid model also explains/predicts many more previously unexplained conscious experiences. For example, see the publications listed here:

    https://www.researchgate.net/profile/Arnold_Trehub

  8. A possible solution to the mystery is simply that you all have been trying to explain a metacognitive illusion. You have to admit, it’s the ugliest and most parsimonious of all the possibilities. Why assume that humans evolved some extraordinary capacity to cognize themselves as they are rather than as they needed? The brain almost certainly tracks itself in opportunistic ways. This means reflection on consciousness makes do with information adapted to the solution of far, far different problems than ‘What is all this wonder?’ Appearance consciousness is your explanandum. Perhaps it has eluded *formulation* all this time simply because it doesn’t exist.

    Otherwise, my bet is that consciousness has eluded explanation for the same reason the brain has. It’s maddeningly complicated stuff.

  9. No. I think it’s more likely that any explanation cannot possibly escape metaphor. Because it’s something that is so personal to us, the only way to describe it IS in metaphor.

  10. I would also suspect that consciousness is so puzzling because it is always changing so much so that it is always eluding us when we try to define it. Whereas the process of defining something is a narrowing down onto a pinpoint, something that works especially well for physical objects which can be whittled down into smaller “things-of-themselves”, consciousness is procedurally the opposite.

  11. Good fun, thanks Peter!

    We should keep in mind that the very idea of the physical is a representational take on reality, one which is always mediated by experience for each conscious subject. Then the question arises of how experience itself comes to exist within the physicalist framework – how it gets caused or produced or entailed by certain physical goings-on. People want to find, or expect to find, experience as a quasi-physical property or causal product of the physical, which is understandable since we’re conditioned to take physicalism as reality, not a representation of it. But since experience itself isn’t out there in the world it participates in representing (only its physical correlates are out there), I don’t think causal or emergentist or panpsychist explanations will ever pan out.

    What might pan out, imo, is an explanation which makes representation, not the physical, the master concept, from which both the phenomenal and physical can be understood as representational takes on reality. This helps us resist the eliminativist temptation: to say that consciousness, unlike the physical, is just an illusion. No, qualia are just as *real* elements of representation as are the physical parameters that participate in the physical model.

    Don’t ask what *really* exists independent of your models, just know that as a knower you’re stuck within at least two of them, one qualitative (consciousness) and one quantitative (science). It may be a mistake to think one reduces to the other, even though of course we find robust dependency relations between phenomenal experience and brain states.

  12. Tom: “Don’t ask what *really* exists independent of your models, just know that as a knower you’re stuck within at least two of them, one qualitative (consciousness) and one quantitative (science).”

    Well said. We *are* stuck with private 1pp conscious descriptions (qualitative) and 3pp public conscious descriptions (quantitative/science).

  13. Tom: “What might pan out, imo, is an explanation which makes representation, not the physical, the master concept, from which both the phenomenal and physical can be understood as representational takes on reality. This helps us resist the eliminativist temptation: to say that consciousness, unlike the physical, is just an illusion. No, qualia are just as *real* elements of representation as are the physical parameters that participate in the physical model.”

    And yet representation occasions almost as much dispute and confusion as consciousness does, so it seems pretty optimistic to see it as part of the solution as opposed to part of the problem. I’m just curious, Tom, what would it take to convince you the concept needs to be abandoned?

  14. Scott, don’t you see that your own argument has no substance without the concept of representation? If there were no representations in our brains, to what would our words refer?

  15. I think the line ‘if it’s stupid, but it works, it’s not stupid’ can be extended to substance: ‘if it has no substance, but it works, it has substance’.

    I don’t think something necessarily has to first make sense to us in order to work. It can work without having first made sense to us. And if the thing called consciousness has an explanation that is really counterintuitive, it’s definitely not going to make sense to us.

  16. Arnold: “If there were no representations in our brains, to what would our words refer?”

    The question can be expanded: How could we be right or wrong about the world? With reference comes truth, and with truth comes correctness. So, to be clear, your otherwise intuitive question is assuming that ‘ought’ somehow belongs to the book of ‘is.’ No one has come close to figuring out how this could possibly work, which is a big reason why no one can agree on what representations are.

    In other words, a great chunk of the confusion that characterizes cognitive science hangs from your intuitive question. Meanwhile, we know for a fact that our brains systematically covary with their environments, and that they have no way of cognizing the complexities of this covariance, and as such must rely on heuristics that solve neglecting that information.

    Attributing representational content provides an acausal shorthand for our relation to the world. Speaking of reference does plenty of work in plenty of circumstances. I fear the brain isn’t one of them.

  17. Scott: “With reference comes truth..”

    Not so! Truth exists only in abstract formal systems. We are not omniscient, so we can never know the essential nature of reality. The referential images of our language are just the transparent brain representations of the world in which we exist. The only evidence that we have that they conform in some way to the “TRUTH” is that these representations appear to contribute to our survival and occasional flourishing in an uncertain and treacherous world. Our representations are an essential part of our brain’s heuristics.

    “The referential images of our language are just the transparent brain representations of the world”

    I agree with most of your comment, Arnold, but am not sure what to make of this sentence which equates two concepts I can’t quite grasp. Is an example of a “referential image of our language” a referring word, AKA a name? If so, why call it an “image”? A phenomenal image may result from encountering a name, but in what sense does such an image “refer” to the named entity?

    A “brain representation of the world” presumably is some sort of neural structure. For example, I think of language facility in terms of context-dependent dispositions to use words, such dispositions being implemented as sensorimotor structures. One could call such a structure a “representation”, but it isn’t obvious to me that the cost/benefit trade of doing so would be favorable.

    In any event, in what sense is a brain representation “transparent”? Is this just a way of emphasizing that neural structures aren’t 1pp observable?

  19. Scott: “Arnold: So are you saying that representations can’t be wrong?”

    Not at all. Illusions and hallucinations are wrong brain representations. But most of our representations are close enough to the reality of our world to enable us to adapt successfully. On the other hand, careful consideration of the systematics of our illusions can give us very useful information about the design of our cognitive brain mechanisms.

  20. Charles: “Is an example of a “referential image of our language” a referring word, AKA a name?”

    No. I take a referential image to be the image of an object or event to which words refer. It is these images that provide the semantic anchors/tethers of our language. Otherwise language would consist of no more than meaningless marks and sounds.

    “In any event, in what sense is a brain representation “transparent”? Is this just a way of emphasizing that neural structures aren’t 1pp observable?”

    Yes. What is represented is commonly experienced as something out there in the world and not as the activity of one’s brain.

  21. Arnold: “Not at all. Illusions and hallucinations are wrong brain representations.”

    So representations possess the property of evaluability?

  22. Scott: “So representations possess the property of evaluability?”

    Yes, they do. Consider the moon illusion, in which our brain represents the moon at the horizon as much larger than the moon at its zenith. This phenomenon has been evaluated from at least the time of Aristotle as a very puzzling experience (a conscious representation) that demanded an explanation. It wasn’t until just recently that the moon illusion could be explained as a normal brain event given the innate neuronal mechanisms of our retinoid system.

  23. So, Arnold, do you take the “meaning” of a referring word to be the mental image (or perhaps a family of mental images) associated with that word in a person’s brain?

  24. @Charles,

    Yes, I do. Words are culturally determined events where significantly different expressions can and do refer to the same kinds of mental images. This is why a novel can be enjoyed in many different languages.

  25. @arnold: I wonder if you have ever tried psychedelics, and if that would change your opinion on what a hallucination is in terms of it being a ‘wrong representation’.

  26. pmath,

    The biochemistry of psychedelics puts the brain in a totally different ball park. The hallucinations of “normal” psychopathology are more to the point of “wrong representations”.

  27. Scott,

    Depends on what is to be evaluated/analyzed. Some putative brain mechanisms for evaluating experience have been described in *The Cognitive Brain* (MIT Press 1991). How do you evaluate whether the moon you see at the horizon is really bigger than the moon seen overhead?

  28. @Peter: Interesting that James said that. Is there some context? I ask because he also said this in regard to producer vs dualist filter/transmitter models:

    “The theory of production is therefore not a jot more simple or credible in itself than any other conceivable theory. It is only a little more popular. All that one need do, therefore, if the ordinary materialist should challenge one to explain how the brain can be an organ for limiting and determining to a certain form a consciousness elsewhere produced, is to ask him in turn to explain how it can be an organ for producing consciousness out of whole cloth. For polemic purposes, the two theories are thus exactly on a par.”

  29. @arnold: No, I just don’t really get what you’re asking. The least the consciousness is like for me?

  30. @Arnold: It’s least like matter – which is nonconscious, supposedly has no sensory qualities, and occupies a spatio-temporal location. Your question makes me recall something Bohm once said about the Implicate Order, that Consciousness was closer than matter to the fundamental state. That said I don’t know if I’ve ever properly grasped anything more than the fringes of what Bohm was getting at.

  31. Isn’t it the fact that consciousness is not a “closed” reflection of the brain structure, but also and essentially a product of the nature of environmental and collective interactions of evolutionary entities (like us), which interactively shape their brain and the type of consciousness they (we) intimately experience: in the developmental stages of the brain and of language acquisition, in the common adoption of norms and conventions, and in the variety and richness of categories of life-impacting experiences which all fashion consciousness and in the long term govern species evolution? The sharing of deep and meaningful communication between entities (the most advanced means of which today, for us, being language) is then at the same time the engine and manifestation of a degree of “same-levelness” of consciousness between entities, which is not necessarily equal to complete “systemic identity” of individual consciousness. (And then, as Prof. Trehub said in Edge, machines are far from thinking because they lack a point of view, which would only be achieved by their own collective adaptation and evolution — something which could still happen.)
