Minds, Matter and Mechanisms

Will the mind ever be fully explained by neuroscience? A good discussion from IAI, capably chaired by Barry C. Smith.

Raymond Tallis puts intentionality at the centre of the question of the mind (quite rightly, I think). Neuroscience will never explain meaning or the other forms of intentionality, so it will never tell us about essential aspects of the mind.

Susanna Martinez-Conde says we should not fear reductive explanation. Knowing how an illusion works can enhance our appreciation rather than undermining it. Our brains are designed to find meanings, and will do so even in a chaotic world.

Markus Gabriel says we are not just a pack of neurons – trivially, because we are complete animals, but more interestingly because of the contents of our minds – and he broadly agrees that intentionality is essential. London is not contained in my head, so aliens could not decipher from my neurons that I was thinking I was in London. He adds the concept of geist – the capacity to live according to a conception of ourselves as a certain kind of being – which is essential to humanity, but relies on our unique mental powers.

Martinez-Conde points out that we can have the experience of being in London without in fact being there; Tallis dismisses such ‘brain in a vat’ ideas: for the brain to do that it must have had real experiences, and there must be scientists controlling what happens in the vat. The mind is irreducibly social.

My sympathies are mainly with Tallis, but against him it can be pointed out that while neuroscience has no satisfactory account of intentionality, he hasn’t got one either. While the subject remains a mystery, it remains possible that a remarkable new insight that resolves it all will come out of neuroscience. The case against that possibility, I think, rests mainly on a sense of incredulity: the physical is just not the sort of thing that could ever explain the mental. We find this in Brentano, of course, and perhaps as far back as Leibniz’s mill, or in the Cartesian point that mental things have no extension. But we ought to admit that this incredulity is really just an intuition, or if you like, a failure of imagination. It puzzles me sometimes that numbers, those extensionless abstract concepts, can nevertheless drive the behaviour of a computer. But surely it would be weird to say they don’t, or that how computers do arithmetic must remain an unfathomable mystery.
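As a concrete aside (a minimal sketch of my own, not anything from the discussion): when a computer ‘does arithmetic’, the abstract numbers bottom out in purely physical manipulations of bit patterns. The toy Python function below adds two non-negative integers using nothing but bitwise operations, which is essentially what a hardware adder does with voltages.

```python
def add(a: int, b: int) -> int:
    """Add two non-negative integers using only bitwise operations,
    mimicking what a ripple-carry adder does in hardware."""
    while b:                 # loop until no carries remain
        carry = a & b        # positions where both bits are 1 generate a carry
        a = a ^ b            # XOR sums each bit pair, ignoring the carries
        b = carry << 1       # carries shift one place left for the next pass
    return a

assert add(19, 23) == 42
```

Nothing in that loop ‘knows’ about numbers; it is all pattern-shuffling, and yet arithmetic reliably comes out of it.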

29 thoughts on “Minds, Matter and Mechanisms”

  1. The emerging consensus theory at Canonizer.com known as “Representational Qualia Theory” (https://canonizer.com/topic/88-Representational-Qualia/6#statement) has an updated version that recently went live. It includes a new falsifiable definition of consciousness, and of things like intentionality, as “computationally bound elemental physical qualities or qualia”.

    Perhaps I don’t fully understand what some people mean by “intentionality”. Could a self-driving Tesla car “intend” to drive me home, while it does exactly that? Representational Qualia Theory predicts our intentionality is similar. The difference is that a computer’s “intentionality” is abstract, devoid of any phenomenal qualities, while ours is made out of computationally bound elemental causal physical qualities or qualia.

    Would love to hear your thoughts on this. Do you agree? Disagree?

  2. Yes we need to talk, but at times even explanation could be-is all optics…in front of observation…

  3. I think as you (Peter) recognize, intentionality is the key concept here, and I think the great disconnect happens when you think intentionality does not have a physical explanation.

    In fact (I think), intentionality does have a physical explanation, but having access to a particular physical system – in this case the brain at any one moment – does not provide enough information to derive intention. What you need to generate an explanation of intention is the physical/causal history of that system.

    So, the example which comes to my mind is of a campground where camping sites are indicated by numbered posts near trails, and the tents are far enough down the trail that they cannot be seen from the post. You might come across a numbered post with a distinctive coffee mug sitting on top of it. Just by examining the mug you can have no idea whether there is an intention associated with it (“Hey late comers, we’re here! Follow this trail!”) or whether someone simply finished their coffee, put the mug in a convenient spot, and forgot about it.

    So to bring this to the video discussion, you can think about London while having no part of London in your head, but you cannot think about London if there never was a physical London and no one ever spoke the word “London” to you. You could make up a fictional place like Xanadu, but the “making up” has a neural causal history, and you could not think about it without first making it up.

    In the end, I think that neuroscience is not going to generate this epiphany of intention. That’s what philosophy is for. But once you have this understanding, everything will come down to neurons interacting with neurons, which of course includes my neurons interacting with your neurons via media such as written words, spoken words, gestures, etc.

    *
    [yes we did the camping thing, but the object was not a coffee mug, it was some other object that was identifiable to our group. But being from Seattle, I’m all about the coffee.]

  4. I think it simplifies things to realise that what the subconscious brain does is control. What the conscious brain does is control of control, where its raw materials are the control processes of the subconscious brain (not things external to the brain). I think this matches up with what James says in comment 3. The control processes only have any usefulness (meaning, if you like) because they set up an appropriate, useful relationship with the external world.

  5. They talk of the binding problem as unsolved, i.e. how things detected in different parts of the brain are associated, but this is just about relationships and an attentional scheme of referencing.

    An aspect of this is timing: consciousness can only be said to exist not instantaneously, but over a duration of at least a cognitive cycle of a few hundred milliseconds – the time it takes for each part of the brain to potentially interact with each other part, so that the brain has a coherent overall state and acts as a whole. In the instant it is just a bag of bits. Physics doesn’t work in terms of complex wholes; it works with infinitesimal bits of stuff and their interactions.

    The mind depends on particular patterns of materials being significant to future outcomes…a bit like the ‘iai’ symbol behind them on the stage…it is completely accounted for by physics, but its meaning only emerges when it has an effect on a pattern and a process in a brain that knows what to do with that pattern!

  6. I’m not sure I agree numbers have no extension. We create them based on observations of the physical world. The “natural numbers” are so-called for good reason. Geometry, likewise, is abstract, but is also a distillation of physical reality. Circles and angles are everywhere!

    Just consider the importance of the number of protons in any given atomic nucleus.

  7. Computers accomplish “binding” via the CPU. For example, two abstract numbers are computationally bound together, in a very minimal way, when they are loaded into registers and a comparison operation, like subtraction, is performed on them. It takes many sequential steps like this to accomplish anything like the amount of computational binding our brain achieves in consciousness. “Representational Qualia Theory” (see: https://canonizer.com/topic/88-Representational-Qualia/6#statement) predicts that Tononi’s Integrated Information Theory measures the amount of computational binding going on (see the “Unification of Many Theories” section).
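    As a minimal sketch of this register-level binding – a toy Python model, not any real instruction set, with illustrative names – the two values stay independent until a single operation takes both as inputs:

    ```python
    def compare_by_subtraction(a: int, b: int) -> int:
        """Bind two otherwise independent values with one subtraction."""
        reg1, reg2 = a, b      # load step: each value sits alone in its own 'register'
        diff = reg1 - reg2     # binding step: one operation relates both values at once
        return diff            # the sign of the result encodes the comparison

    print(compare_by_subtraction(7, 7))   # 0  -> equal
    print(compare_by_subtraction(3, 9))   # -6 -> the first value is smaller
    ```

    Each such step binds only two values at a time; the point is that consciousness seems to achieve something like this across a vast number of elements at once.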

  8. I’d agree that intentionality is crucial, but I always feel like I must be missing something when someone talks about it being a great mystery that can never be solved by neuroscience. Cognitive neuroscience is very far from a full accounting of the causal chain, but if you read about what is known about the sensory systems, particularly the visual one, you quickly get the sense that this is a matter of time and persistence.

    So it probably isn’t a surprise that my sympathies are with Martinez-Conde. Tallis actually came across to me as dogmatic. History hasn’t been kind to people who say science will never learn X.

  9. I agree. Martinez-Conde! You go girl.

    But bridging the explanatory gap takes a bit more than time and persistence. Had our ancestors – and now we – been less sloppy with the scientific use of names for specific physical qualities like “red”, we would be much further along in our understanding of the subjective qualities of consciousness. Why has nobody noticed that all objective, abstract information describing physics is devoid of any qualitative information? The word “red” isn’t physically red; you need to know which physical quality it is a label for. Why does nobody rigorously ask: “Which physical quality?”

    And as long as we continue to fail to make what should be the obvious connection between quality-devoid abstract objective information describing the stuff in our brain and our direct subjective qualitative awareness of that same stuff, we will, like Jackson’s Mary, remain “qualia blind”. This could remain true for all time, no matter how much we learn, abstractly, about the physics of the brain.

    For more information, see this draft (I would love any feedback) of a 460-word abstract we are about to submit to some calls for papers: https://docs.google.com/document/d/1IsSkbYT02KUAH1Z1BA-X7WD4gcvNFoNSw3JnlVkFLSk/edit?usp=sharing

  10. I think the challenge of intentionality relates to the interest-relative investigation that consciousness allows.

    At first glance it seems we would need consciousness – specifically the intentional…property? ability? of consciousness – to explain intentionality. Popper and Putnam recognized this difficulty, and Tallis gets into it in his latest book, Of Time and Lamentation.

    I suppose one could say this is merely incredulity, but it seems then on the flip-side the hope of reductionist resolution is also merely an article of faith.

    “What would a philosophy look like that had given up on all reductionist dreams?”
    -Hilary Putnam, Renewing Philosophy

  11. Sci, I don’t understand the difficulty. Yes you need consciousness to explain intentionality. You also need a paintbrush to paint a paintbrush. I don’t see a problem.

    *

  12. You don’t need a paintbrush to explain a paintbrush; you can explain it using interest-relative causes.

    The problem is more like trying to find a logical explanation for logic’s authority.

  13. Observation Qualia Quanta…

    How about… observation is neither the thing itself nor a description of the thing itself in research; it is neutral, as qualia and quanta are either active or passive in research…

  14. Computers don’t ‘do’ arithmetic any more than an abacus does. They are tools that assist in the practice of arithmetic, using methods specifically created for that end.

    To suggest that saying the brain is not explicable in physical terms is equivalent to saying we can’t explain how computers ‘do’ arithmetic (they don’t – we do) is to make the mistake of assuming that, like brains, computers are objects of scientific mystery. They aren’t. They are engineered objects of entirely defined scope, and thus by definition possess no mysteries to ponder or surprises to discover.

    Brains and minds are in the unique position of not only being objects of scientific interest but, like all biological objects, being characterised by non-physical features. Notions like gender, sexual reproduction and natural selection are biological ideas, not reducible to physics – nobody has a problem with that. But suggest that the brain’s irreducible mind properties are biological and can’t be reduced to physical terms, and some people go into a frenzy of the Great Physics Defence, usually hurling a few insults about irrationality. The difference between the physical and the mental is ontological, so such a simple reductive link will never, ever happen.

    That doesn’t mean to say, of course, that correlative links won’t yield some kind of progress. But derive the mental from the physical? Self-evidently impossible.

    Jbd

  15. 16…Isn’t also then, ‘evidence of oneself’ becomes possible…
    …derived-transformed from what is given or taken…

    Furthering our understanding of intentionality in nature…

  16. Quality measures Quantity
    …Being Emergence vs. Pattern Emergence: Complexity, Control, and Goal-Directedness in Biological Systems…
    Jason Winning & William Bechtel, in Sophie Gibb, Robin Hendry & Tom Lancaster (eds.), The Routledge Handbook of Emergence (London: Routledge, 2019), pp. 134–144. Via PhilPapers.

    Abstract: Emergence is much discussed by both philosophers and scientists. But, as noted by Mitchell (2012), there is a significant gulf; philosophers and scientists talk past each other. We contend that this is because philosophers and scientists typically mean different things by emergence, leading us to distinguish being emergence and pattern emergence. While related to distinctions offered by others between, for example, strong/weak emergence or epistemic/ontological emergence (Clayton, 2004, pp. 9–11), we argue that the being vs. pattern distinction better captures what the two groups are addressing. In identifying pattern emergence as the central concern of scientists, however, we do not mean that pattern emergence is of no interest to philosophers. Rather, we argue that philosophers should attend to, and even contribute to, discussions of pattern emergence. But it is important that this discussion be distinguished from, not conflated with, discussions of being emergence. In the following section we explicate the notion of being emergence and show how it has been the focus of many philosophical discussions, historical and contemporary. In section 3 we turn to pattern emergence, briefly presenting a few of the ways it figures in the discussions of scientists (and philosophers of science who contribute to these discussions in science). Finally, in sections 4 and 5, we consider the relevance of pattern emergence to several central topics in philosophy of biology: the emergence of complexity, of control, and of goal-directedness in biological systems.

  17. Today neuroscience has some difficulty explaining human meaning or intentionality. But meaning and intentionality also exist at the animal level (ref. Searle), where they are much easier to address.
    Why not take an evolutionary approach: consider first a possible animal version of human mental states, and then look at how they could have given rise to human ones?

  18. No doubt we’ll learn something about intentionality by examining animals – for example, the research suggesting bees understand the concept of zero.

    But it seems easy enough to explain signaling (one bird ‘screams’, the flock flees) via evolution, because there’s no need for the animals to have aboutness. (This isn’t to say they don’t, just that it’s plausible there’s no meaning.)

    Not that we shouldn’t try – I don’t think anyone, including Tallis, is advocating that neuroscience just give up… but we may also simply lack the kind of brains that can figure out these problems to everyone’s satisfaction…

  19. @ Christophe: Admittedly it’s a question for continued research, but I am less convinced by a mouse-sighting-a-cat example than I am by the possibility that bees comprehend zero.

    Do mice have thoughts and propositions about the presence of cats, or do they simply have wiring in their brains activated by certain sights and smells that correspond with felines?

    I’m not saying it’s definitively the latter, but I don’t think simply observing the mice and cats could answer the question.

  20. That the phenomenon of observation may not be to provide answers…
    …asking why we see from what we see, could lead to more seeing…

    Conflate quality with quantity for intentionality…

  21. @ Sci: Mice do not have human-type thoughts and propositions when facing a danger. For animals it may indeed be more about brain wiring that acts in a quasi-reflex mode, but it is still an “aboutness”. The cat means “danger” for the mouse, like a fast car getting close means “danger” for us. In both cases the intentionality/meaning is relative to a “stay alive” constraint to which all living entities are subject. And humans are subject to constraints that animals do not know, like “look for happiness”.
    An evolutionary approach is bottom-up. We humans have reflex-mode actions, as animals do. But human-type thoughts and propositions may not exist for animals.
    Perhaps the evolution of intentionality can be looked at in terms of the evolution of constraints.

  22. “Will ‘the mind’ ever be fully explained by neuroscience?” Go deep into the annals of neuro-psychological syndromes. There is not one category of alleged human psychological uniqueness that doesn’t have its deficits, aberrations or absence. This surely tells us something about the explanatory sufficiency of neuroscience.

  23. Peter says: “It puzzles me sometimes that numbers, those extensionless abstract concepts, can nevertheless drive the behaviour of a computer.”… So I searched “numbers, those extensionless abstract concepts”, and it seems possible our host is also interested in Basil Hiley’s and David Bohm’s work with quantum potentials… that there is no possibility of homelessness because of non-locality in a quantum universe (that grounding itself is spontaneous there-here); this proposes a solution about ‘black holes’ actually being “black fields”, with the potential then for fathomable mysteries in our cosmos…

  24. Hmmm, numbers and black holes. A curious sort of take on the matter of consciousness and human mentality. The explanatory sufficiency of neuro-psychology has always been enough for me.
