Can we, one day, understand how the neurology of the brain leads to conscious minds, or will that remain impossible?

Round here we mostly discuss the mind from a top-down, philosophical perspective; but there is another way, which is to begin by understanding the nuts and bolts and then gradually work up to more complex processes. This Scientific American piece gives a quick view of how research at the neuronal level is coming along (quite well, but with vastly more to do).

Is this ever going to tell us about consciousness, though? A point often quoted by pessimists is that we have had the complete ‘wiring diagram’ of the roundworm Caenorhabditis elegans for years (Caenorhabditis has only just over 300 neurons and they have all been mapped) but still cannot properly explain how it works. Apparently researchers have largely given up on this puzzle for now. Perhaps Caenorhabditis is just too simple; its nervous system might be quirky or use elegant but opaque tricks that make it particularly difficult to fathom. Instead researchers are using fruit fly larvae and other creatures with nervous systems that are simple enough to deal with, but large enough to suggest that they probably work in a generic way, one that is broadly standard for all nervous systems up to and including the human. With modern research techniques this kind of approach is yielding some actual progress.

How optimistic can we be, though? We can never understand the brain by knowing the simultaneous states of all its neurons, so the hope of eventual understanding rests on the neurology of the brain being legible at some level. We hope there will turn out to be functions that get repeated, that form building blocks of some intelligible structure; that we will be able to deduce rules or a kind of grammar which will let us see how things work at a slightly higher level of description.

This kind of structure is built into machines and programs; they are designed to be legible by human beings and lend themselves to reverse engineering. But the brain was not designed and is under no obligation to construct itself according to regular plans and principles. Our hope that it won’t turn out to be a permanently incomprehensible tangle rests on several possibilities.

First, it might just turn out to be like that. The computer metaphor encourages us to think that the brain must encode its information in regular ways (though the lack of anything strongly analogous to software is arguably a fly in the ointment). Perhaps we’ll just get lucky. When the structure of DNA was discovered, it really seemed as if we’d had a stroke of luck of this kind. DNA amounted to a long string of four repeated characters which, given certain conditions, could be read as coding for many different proteins; it looked as if we had a really clear, legible system of very general significance. It still does to a degree, but my impression is that the glad confident morning is over, and the more we learn about genetics the more complex and messy it gets. But even if we take genetics to be a perfect example of legibility, there’s no particular reason to think that the connectome will be as tractable as the genome.

The second reason to be cheerful is that legibility might flow naturally from function. That is, after all, pretty much what happens with organs other than the brain. The heart is not mysterious, because it has a clear function and its structure is very legible in engineering terms in the light of that function. The brain is a good deal more complex than that, but on the other hand we already know of neurons and groups of neurons that do intelligibly carry out functions in our sensory or muscular systems.

There are big problems when it comes to the higher cognitive functions, though. First, we don’t already understand consciousness the way we already understand pumps and levers. Even when it comes to the behaviour of fruit fly larvae, we can relate inputs and outputs to neural activity in a sensible way. For conscious thought it may be difficult to tell which neurons are doing it without already knowing what it is they’re doing. It helps a lot that people can tell us about conscious experience, though when it comes to subjective, qualitative experience we have to remember that Zombie Twin tells us about his experiences too, though he doesn’t have any. (Then again, since he’s the perfect counterpart of a non-zombie, how much does it matter?)

Second, conscious processing is clearly non-generic in a way that nothing else in our bodies appears to be. Muscle fibres contract, and one does it much like another. Our lungs oxygenate our blood, and there’s no important difference between bronchi. Even our gut behaves pretty generically; it copes magnificently with a bizarre variety of inputs, but it reduces them all to the same array of nutrients and waste.

The conscious mind is not like that. It does not secrete litres of undifferentiated thought, producing much the same stuff every day and whatever we feed it with. On the contrary, its products are minutely specific – and that is the whole point. The chances of our being able to identify a standard thought module, the way we can identify standard functions elsewhere, are correspondingly slight.

Still, one last reason to be cheerful: one thing the human brain is exceptionally good at is intuiting patterns from observations; far better than it has any right to be. It’s not for nothing that ‘seeing’ is literally the verb for vision and metaphorically the verb for understanding. So exhibiting patterns of neural activity might just be the way to trigger that unexpected insight that opens the problem out…

17 Comments

  1. 1. Lloyd says:

    I would argue that brain function is indeed strongly analogous to software, if you consider deep neural net structures software. And they surely are that. I think we DO understand consciousness more closely than you want to admit, Peter. And finally, if Zombie Twin tells us about his experiences, then he had experiences. But, actually, he’s an impossible contradiction, so it won’t happen.

  2. 2. SelfAwarePatterns says:

    I don’t think there’s any way for us to know whether we can fully understand the brain until we either gain that understanding, or hit some epistemic wall that we can’t surmount after prolonged effort. It seems like the most productive assumption is that we can eventually understand it, since the alternative is to just give up and go home.

    One of the problems with asking if we’ll ever understand consciousness, is that people have different intuitive understandings of what the word “consciousness” means. For many, it’s an ontological thing separate and apart from the processing of the brain. I don’t think we’ll ever gain an understanding of this type of consciousness, because it doesn’t exist. It’s a holdover from substance dualism. Like biological vitalism, it’s a mistaken notion. But as neuroscientist Elkhonon Goldberg observed in his book on the frontal lobes, “Old gods die hard.”

    On the other hand, if we want to understand affects, emotion, perception, attention, imagination, motor control, introspection, and symbolic thought, what Chalmers calls the “easy problems”, it’s hard to see that not eventually happening. The question is whether there will be anything still missing after it does.

  3. 3. Paul Topping says:

    Rather than look for something analogous to software, I think it is more important that the brain’s function and anatomy be understandable at various independent levels. If, on the other hand, every mechanism is tangled up with every other mechanism, it is going to be really hard to decipher.

    Looking at the software analogy issue from another direction, there is much to mine in the nature/nurture divide. Looking at what brain functionality is inherited vs what must be obtained after conception should tell us a lot. The latter must be represented in structures and/or chemicals that change.

  4. 4. Jayarava says:

    Our gut is also lined with neurons. Last estimate I saw was ca 100 million for humans. About the size of a cat brain. So that the gut does clever things is not entirely surprising. Of course the gut is also in a symbiotic relationship with gut microbiota which we now know play a role in regulating certain elements of homoeostasis.

    This suggests that understanding the brain will not be enough on its own. The failure to successfully model the brain of C. elegans, except on the macro scale, may well be related to this (though my understanding is that it’s early days in the sense that the complete synapse map was only completed a few years ago). The brain is a system within a system. If the gut and its symbiotes are also involved in homoeostasis then understanding the brain is insufficient to account for homoeostasis (the most basic function). This makes it highly likely that just understanding the brain will be insufficient for other functions and/or behaviours as well.

    What if it turns out that homoeostatic processes outside the brain are in feedback loops that include the brain (but are not driven by the brain)?

    Also C. elegans is a community of around 2000 cells (from memory it’s about 300 neuron, 700 gonad, and 1000 muscle – something like that). All eukaryote cells are also symbiotic communities and are capable of quite sophisticated behaviour on their own. We’ve made great strides in understanding intra-cellular processes, but there is a long way to go before we fully understand how a cell works.

    In a very small creature like C. elegans, perhaps those cellular processes have a disproportionately large effect on the organism, compared with a very large creature like us. The action of one cell amongst trillions is insignificant, but one amongst 2000 may not be. What is it like to be a creature whose gonad cells outnumber brain cells 2:1? Are we really assuming that gonad cells play *no part* in homoeostasis or behaviour?

    In terms of neuroscience there is a gross bias towards metaphysical or ideological reductionism. And the brain is a complex system with emergent properties. It is axiomatic in biology that dissection can only tell you so much about your organism. You have to observe it *living*. It’s all very well freezing the little buggers and mapping the brain of a dead one, but what does a *living* C. elegans brain look like, and how does it behave? What is each of those neurons doing? They are not simple transistors by any means! After all they evolved from fully viable single-celled organisms!

    I think we’re still defining the scope of the problem and realising the limits of reductionism. The fact that we are still talking about brains in isolation is a symptom of this. The brain is part of a system. Take away the system and the brain stops working. So much for reductionism when studying structure (though of course we still need it to study and understand substance).

    In terms of scope, when is the fact that an isolated brain is a *dead* brain going to force us to think of the organism-as-a-whole as the structure which instantiates conscious mental phenomena? Fantasies about brains in vats and Boltzmann brains and so on don’t help. The only *real* brains we know about are all completely integrated into organisms.

    BTW. No brain is anything like any computer. Or the computer models of C. elegans would have immediately succeeded. Whereas all attempts to model the brain as a computer have utterly failed. Any optimism on this score now is religion rather than science (in that belief defies evidence).

  5. 5. Lloyd says:

    I do not say that today’s deep neural net structures adequately model the brain. I do say that they represent a kind of software that will be expanded in ways that will succeed in capturing the essence of “how the brain works”.

    To my mind, there are two basic elements of consciousness. The “real-time” element is the instantaneous sense of being in the world, the awareness that there is a world out there. The second element is the sense of who I am, of what it is that is in here looking out. That part is primarily due to memories of what the world has been like in the past and consists of my putting together in real time a sense of what it means to me to be in the world, of what I bring to the moment that makes me who I am.

    The first element, in my opinion, will be experienced by the first machine that has the ability to perceive the world in a way that integrates the perception of self in the world. The second element depends entirely on the range of memories of previous world interactions that are available at the moment of perception.

  6. 6. Lloyd says:

    Kurzweil talked about wanting to “upload” his consciousness into a machine. That concept makes no sense with respect to the first element of consciousness. Whatever perception mechanisms the “machine” in question has, those mechanisms provide the first element. There is no transplanting this. On the other hand, memories could, in theory, be copied from one organism to another and could then provide the data needed to implement the second element of consciousness.

  7. 7. Hunt says:

    The brain is highly structured, which indicates that eventually it will be conceptually broken down and understood. If it were just a homogeneous mass of cells, that would be another matter.

    There’s almost always a hierarchical structure to biological systems, which is quite amazing when you think of it. I’m reminded how certain evolutionary computer programs, when set to create electronic circuitry, invariably create a jumbled mess of wires and components. These often perform a function better than structured circuits, but they are conceptually impenetrable. They might use 8 components to create an amplifier, wired together in bizarre fashion, while human designed circuits use 15. A critical insight is that it’s actually the constraints and restraints set by the conditions of evolution that create the need for structure.

    I note that actually the course of understanding the brain isn’t all that different from how other organs were understood. We know about its high level structure, about regions that deal with one function or another. We know about its low level function, how neurons transmit impulses. It’s the middle layer that remains elusive. This seems to be the normal course. Ancient investigators generally knew what the heart was for; they knew about blood, of course. They didn’t understand the details of blood chemistry, or the mechanisms of how blood nourishes tissue.

    This can be illustrated even more clearly with kidney function. The ancients probably knew the kidneys had some function in cleaning the blood; they knew they produced a waste product (urine). They knew about their basic anatomy. They didn’t have a clue about their physiology or anything about acid/base chemistry.

    Understanding the brain seems to be following the same course. The middle layer is the trick.

  8. 8. David Duffy says:

    Specifically for the C. elegans example, Roberts et al 2016
    https://elifesciences.org/articles/12572

    Although the neural circuitry for local search has been described in considerable detail, our understanding of the system remains limited, partly for lack of key physiological data…anatomical data do not specify the signs or strengths of synaptic connections. A quantitative model that incorporates physiological properties of the command neurons and their synaptic connections is needed…experimental data are insufficient for creating a neuron-by-neuron model of the command network that incorporates biophysical details such as synaptic and membrane conductances without introducing a heavy load of unconstrained parameters. Nor would such a mechanistically detailed model necessarily provide the appropriate level of abstraction in which to intuitively understand C. elegans search behaviors.

    Of course, they go on to present such a model.

    For a neural network model, one could argue that synaptic weights are the closest thing to software.
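
    A toy sketch of what that means (my own illustration, not from the paper): in an artificial neural network the code for every neuron is identical, and all the functional differences live in the weights. Two networks built from the same neuron function but loaded with different weights compute different things, which is why the weights play the role software plays in a conventional machine.

    ```python
    # Toy illustration: the "program" of a neural network lives in its weights.
    # The same neuron function, given different weight/bias settings, computes
    # different logical functions (here, AND vs OR).

    def neuron(inputs, weights, bias):
        """Classic weighted-sum model neuron with a step activation."""
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1 if total > 0 else 0

    # Same "hardware" (the neuron function), two different "programs":
    and_weights, and_bias = [1.0, 1.0], -1.5   # fires only when both inputs are 1
    or_weights, or_bias = [1.0, 1.0], -0.5     # fires when either input is 1

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", neuron([a, b], and_weights, and_bias),
                        "OR:", neuron([a, b], or_weights, or_bias))
    ```

    Of course real neurons are vastly more complicated than this weighted-sum caricature; the sketch only shows in what sense the weights, rather than the wiring diagram alone, carry the functional content.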

  9. 9. Lloyd says:

    Understanding the brain is not about understanding neurons. It’s about understanding the relationship between intellect and emotions.

  10. 10. arnold says:

    Understanding a brain is no different than–understanding is materiality and is also consciousness…
    …For some of us, Lloyd for instance, seem to be saying philosophy is for development of a self, I agree…

    Is more work or effort needed than just a thoughtful remembrance of a self…
    …For oneself to begin understanding/consciousness and ourselves are different interactions in this Cosmos…

  11. 11. John Davey says:

    Peter

    Can we put zombies into the bed they can’t sleep in? If zombies aren’t conscious, then their experience can’t include conscious experiences, which makes them epistemologically different to non-zombies. Their memories have different contents. They are not parallel. Not comparable.

    Their existence even in a theoretical sense demonstrates nothing. Forget about them !

    And please don’t suggest that both zombies and non-zombies have the same causal apparatus. If they did they’d both have the same consciousness level, unless brains are magic.

    Regs
    Jbd

  12. 12. John Davey says:

    Peter

    On the general point you are right. If nematodes are too complicated, we shouldn’t even be thinking about human brains. It’s all wasted effort.

    And that’s a shame, because it has led to the creation of industries dedicated to avoiding the hard scientific work – the bloated and wholly useless cognitive science caper.

    But try telling this generation of scientists that they will only be creating facts that might not be useful for 100 years and they lose interest. You can see the temptation of computationalism as a cheap and nasty shortcut – but ultimately wasting effort on it will only slow down things further.

    Jbd

  13. 13. Jayarava says:

    Hmm. I commented, but it has not appeared.

    In any case re C elegans I wanted to highlight this news: “Worm atlas profiles gene readouts in every cell type in the animal.” https://phys.org/news/2017-08-worm-atlas-profiles-gene-readouts.html

    The idea that no progress is being made in this field is clearly wrong. It’s just that with each new discovery there is massive scope creep as we realise that we didn’t understand the problem we were trying to solve.

    For example, neurons are not monolithic, but come in dozens of types. Again, the prevailing reductionism wants to collapse the category of neuron to a single simple type, which leads to nonsense.

    Other recent research suggests that all cells are capable of inter-cellular communication and that this is especially important in cancers (for which the C elegans research has provided some important insights). https://medicalxpress.com/news/2017-05-elegans-solution-worm-genetic-screen.html#nRlv

    On the other hand I see no reason to throw up our hands in despair. Organisms are complex enough to have emergent behaviours at multiple levels that can only be fully understood at their own level. So of course we have to study organisms at different levels: from macro-molecules to societies and everything in between. At this level of complexity and organisation causation in all directions is possible.

    BTW I agree with John Davey: Discussing zombies is an example of philosophers wasting everyone’s time.

  14. 14. Peter says:

    Sorry the earlier comment failed to appear, Jayarava: I don’t know what went wrong.

  15. 15. Callan S. says:

    I think the zombies are a good example, because they’d report themselves having consciousness – there’d be nothing about them that lets them know this idea they don’t have consciousness.

  16. 16. John Davey says:

    Callan

    “I think the zombies are a good example, because they’d report themselves having consciousness – there’d be nothing about them that lets them know this idea they don’t have consciousness.”

    Why would they report consciousness if they don’t have it? Would a group of men who couldn’t smell start talking about coffee aromas? If they’ve never had it and have no experience of it, why would it arise in casual zombie conversation (the last three words indicating how farcical this is)?

    J

  17. 17. John Davey says:

    Jayarava

    I agree with you.

    “For example, neurons are not monolithic, but come in dozens of types. Again, the prevailing reductionism wants to collapse the category of neuron to a single simple type, which leads to nonsense.”

    Neural nets were part of a computer programming technique that wasn’t necessarily meant to be taken that seriously as a model of biological artifacts; it was just an alternative to conditional logic, a more nuanced way of matching inputs to outputs. What has caught on, though, is the notion that the simple mathematical model of a neuron (principally a way of softening conditional logic) means the real thing is the same: a simple input-output node and that’s all there is. Neurons and “signals”. So everything is focussed on ‘neurons’ at the expense of everything else. It’s staggering really. A case of the tail wagging the dog, something that characterises this area.

    JBD
