Bridging the Brain

How can we find out how the brain works? This was one of five questions put to speakers at the Cognitive Computational Neuroscience conference, and the answers, posted on the conference blog, are interesting. There seems to be a generally shared perspective that what we need are bridges between levels of interpretation, though there are different takes on what those are likely to be and how we get them. There’s less agreement about the importance of recent advances in machine learning and how we respond to them.

Rebecca Saxe says the biggest challenge is finding the bridge between different levels of interpretation – connecting neuronal activity on the one hand with thoughts and behaviour on the other. She thinks real progress will come when both levels have been described mathematically. That seems a reasonable aspiration for neurons, though the maths would surely be complex; but the mind boggles rather at the idea of expressing thoughts mathematically. It has been suggested in the past that formal logic was going to be at least an important part of this, but that hasn’t really gone well in the AI field and to me it seems quite possible that the unmanageable ambiguity of meanings puts them beyond mathematical analysis (although it could be that in my naivety I’m under-rating the subtlety of advanced mathematical techniques).

Odelia Schwartz looks to computational frameworks to bring together the different levels; I suppose the idea is that such frameworks might themselves have multiple interpretations, one resembling neural activity while another is on a behavioural level. She is optimistic that advances in machine learning open the way to dealing with natural environments: that is a bit optimistic in my view but perhaps not unreasonably so.

Nicole Rust advocates ‘thoughtful descriptions of the computations that the brain solves’. We got used, she rightly says, to the idea that the test of understanding was being able to build the machine. The answer to the problem of consciousness would not be a proof, but a robot. However, she points out, we’re now having to deal with the idea that we might build successful machines that we don’t understand.
Another way of bridging between those levels is proposed by Birte Forstmann: formal models that make simultaneous predictions about different modalities such as behavior and the brain. It sounds good, but how do we pull it off?

Alona Fyshe sees three levels – neuronal, macro, and behavioural – and wants to bring them together through experimental research, crucially including real world situations: you can learn something from studying subjects reading a sentence, but you’re not getting the full picture unless you look at real conversations. It’s a practical programme, but you have to accept that the correlations you observe might turn out complex and deceptive; or just incomprehensible.

Tom Griffiths has a slightly different set of levels, derived from David Marr: computational, algorithmic, and implementational. He feels the algorithmic level has been neglected; but I’d say it remains debatable whether the brain really has an algorithmic level. An algorithm implies a tractable level of complexity, whereas it could be that the brain’s ‘algorithms’ are so complex that all explanatory power drains away. Unlike a computer, the brain is under no obligation to be human-legible.

Yoshua Bengio hopes that there is, at any rate, a compact set of computational principles in play. He advocates a continuing conversation between those doing deep learning and other forms of research.
Wei Ji Ma expresses some doubt about the value of big data; he favours a diversity of old-fashioned small-scale, hypothesis-driven research; a search for evolutionarily meaningful principles. He’s a little concerned about the prevalence of research based on rodents; we’re really interested in the unique features of human cognition, and rats can’t tell us about those.

Michael Shadlen is another sceptic about big data and a friend of hypothesis driven research, working back from behaviour; he’s less concerned with the brain as an abstract computational entity and more with its actual biological nature. People sometimes say that AI might achieve consciousness by non-biological means, just as we achieved flight without flapping wings; Shadlen, on that analogy, still wants to know how birds fly.

If this is a snapshot of the state of the field, I think it’s encouraging; the approaches briefly indicated here seem to me to show good insight and real promise. But it is possible to fear that we need something more radical. Perhaps we’ll only find out how the brain works when we ask the right questions, ones that realign our view in such a way that different levels of interpretation no longer seem to be the issue. Our current views are dominated by the concept of consciousness, but we know that in many ways that is a recent idea, primarily Western and perhaps even Anglo-Saxon. It might be that we need a paradigm shift; but alas I have no idea where that might come from.

13 thoughts on “Bridging the Brain”

  1. Hi Peter,

    Could you suggest some reading regarding “in many ways that is a recent idea, primarily Western and perhaps even Anglo-Saxon.” Sounds fascinating!

    In most computer engineering, “algorithms” are separate from data. For example you can use the same Machine Learning algorithm to predict Netflix customer choices, or to predict quality control in a manufacturing process. But in the human brain, the general consensus is that there is no sharp division between algorithms and data, either by brain region or by neural mechanism. This seems like just as important a worry about Griffiths’s approach as the one you mention.
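    To make that separation concrete, here’s a minimal sketch (the function names and data are invented for illustration): one generic learning routine applied unchanged to two unrelated domains, film preferences and manufacturing quality control.

```python
# Illustrative only: a single generic algorithm (nearest-centroid
# classification) reused on two unrelated datasets. The algorithm
# knows nothing about either domain; only the data differs.

def nearest_centroid_fit(examples):
    """Learn one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        c = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            c[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in c] for label, c in sums.items()}

def nearest_centroid_predict(model, features):
    """Classify by squared distance to the nearest learned centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# The same algorithm, two unrelated domains:
film_model = nearest_centroid_fit([((5.0, 1.0), "liked"), ((1.0, 4.0), "disliked")])
qc_model = nearest_centroid_fit([((0.1,), "pass"), ((0.9,), "fail")])

print(nearest_centroid_predict(film_model, (4.5, 1.5)))  # → liked
print(nearest_centroid_predict(qc_model, (0.2,)))        # → pass
```

    If the brain really interleaves its ‘algorithms’ with its ‘data’ in a single substrate, no decomposition this clean may be available, which is the worry about the Marr-style algorithmic level.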

  2. Paul,

    The relative novelty of our idea of consciousness is an interesting point and I should probably make an effort to dig out some proper sources and write it up a bit, but I don’t have any good reading to recommend at the moment.

    Although philosophical discussion of the mind goes back a long way, I don’t think consciousness really became central until recently. You can quote William James on the subject, and the behaviourists mentioned it as the thing they particularly didn’t want to talk about, but I think most philosophy of mind remained centred on topics like the ‘mind-body problem’ and personal identity until computers became salient and we needed something that summed up what they hadn’t got.

    I’ve been told that in Indian and Chinese philosophy the question of consciousness just doesn’t really arise, and I don’t find it in Ancient Greek discussions either. Calling it specifically Anglo-Saxon might be a stretch, but in Romance languages one word tends to mean both ‘consciousness’ and ‘conscience’ (and maybe even ‘confession’) which seems to build in ideas of morality and introspection. In German, it seems we have a different problem, with no word for ‘mind’, at least according to Marcus Gabriel.

    That seems such an unexpected claim, given that Brentano defined the mental in terms of intentionality, that I had to go and look up what German word he used. ‘Psychische’, it turns out. In passing he just mentions the word ‘mentale’ as another term used by medieval scholastics for intentional phenomena. It’s rather hard for me to assimilate the idea that none of the great German psychologists ever had a word that directly equated to ‘mind’, presumably making do with the equivalent of ‘psyche’ instead…

  3. I personally hope all of the proposed paths can be pursued in parallel, with each taking the insights from the other as they go. We can’t know which will be fruitful.

    I’ve increasingly become uneasy about trying to scientifically understand consciousness in and of itself. The problem is that, even within western culture, the term “consciousness” is amorphous. Without a precise definition, whatever science finds, there will always be someone saying, “No, no, you haven’t actually explained the issue yet,” with the goalpost constantly being shifted around.

    It seems like a better strategy is to understand more tractable capabilities, such as exteroception, interoception, affects, emotions, attention, imagination, introspection, motor control, memory, etc. Obstacles encountered in one component may eventually be overcome through insights from studying the other components.

    I’d also love to see a write-up about the cultural scope of the consciousness concept. It seems like other cultures such as China or India must have some word for the state of being awake and aware, but maybe without our particular dualistic tradition, they haven’t worked themselves up about it the way we have.

  4. We in computer science often blur the distinction between algorithms (code) and data. For example, programming language compilers are algorithms that take other algorithms as input. The famous “halting problem” is about algorithms acting on algorithms. We mostly avoid mixing our algorithms with data for security reasons and to be able to wrap our heads around algorithms. Self-modifying code is easy to create but extremely difficult to understand and control.

    Geoffrey Hinton recently lamented that neural networks worked with extremely simple “neuron” models and that only just recently were researchers starting to explore the space of possible neuron-level algorithms. Big Data AI is largely independent of this work. For the most part, they are still working with these early neural network models. Their recent success has more to do with bringing massive computing horsepower to bear on massive data sets than with new algorithms.

    I believe the exploration of neural models and how groups of neurons might work together is going to be a very fruitful research avenue. For example, if each neuron is a goal-seeking creature in its own right, what exactly are its goals? What happens when we connect a lot of them together? This is a science we have only just begun to explore.
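    That question can be sketched in miniature. Below, each “neuron” is a trivial threshold unit with hand-set (not learned) weights; wiring just three of them together computes XOR, a behaviour no single such unit can produce on its own.

```python
# A toy sketch: simple threshold "neurons" whose combination does
# something none of them can do alone. Weights are hand-set for
# illustration, not learned.

def neuron(weights, bias):
    """Return a unit that fires (1) when weighted input exceeds threshold."""
    def fire(inputs):
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0
    return fire

# Three units wired into XOR; a single threshold unit cannot compute it.
or_unit   = neuron([1.0, 1.0], -0.5)
nand_unit = neuron([-1.0, -1.0], 1.5)
and_unit  = neuron([1.0, 1.0], -1.5)

def xor(a, b):
    return and_unit([or_unit([a, b]), nand_unit([a, b])])

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor(a, b))  # prints 0, 1, 1, 0 respectively
```

    Richer neuron models would presumably enlarge the space of group-level behaviours in ways we haven’t yet mapped.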

  5. Perhaps non-Anglo-Saxon philosophy had a notion of consciousness but did not find it problematic or interesting: most cookbooks give little attention to tables, but tables are clearly important to dining.

  6. When we talk about levels are we talking about levels within the brain or merely about levels of interpretation? If the levels don’t exist within the physical structure of the brain why do we think the levels we perceive are likely to be explanatorily useful? I’m inclined to think that understanding of the brain as a biological organ within the body is the place to start. Who knows, by the time we really understand the brain as an organ within an organism there might not be much else left to explain. (And I know I questioned the usefulness of the term ‘organism’ in response to a previous post, but humor me.)

  7. SelfAware


    I’ve increasingly become uneasy about trying to scientifically understand consciousness in and of itself. The problem is that, even within western culture, the term “consciousness” is amorphous.

    Consciousness means the same thing the world over. It’s irreducible, that’s all – the difference is that in western culture that’s a problem, because only space and time are meant to be irreducible, since they are the basis of physics. In fact some angelic so-and-so’s don’t even accept that space and time are irreducible – although such people are usually incapable of providing even the beginning of a response to the question ‘well, what exactly could space and time reduce to, except space and time?’ In other words, the irreducibility of space and time is a basic, cognitive fact like the irreducibility of consciousness.

    Space and time, and consciousness, are no less useful – or, and this is the main point, no less understood – for being irreducible. What they ‘mean’ or ‘consist of’ isn’t a question likely to ever go anywhere.

    You know what consciousness is, and I know what it is, and any member of homo sapiens knows what it is, but the inability to “define” it is not just not a problem, it’s actually inevitable – even desirable, as it represents a solid basis of pure ignorance from whence to start research – just like physics does, which remains totally ignorant of the nature of space and time and functions perfectly well regardless.

    J

  8. John,
    I have to admit I’ve never understood the assertion that consciousness is simple. I can understand the assertion from someone who hasn’t done much reading on the subject, but I’m baffled when it comes from people who have. It makes me wonder what I’m missing.

    On consciousness being irreducible, I actually take myself to have listed several components of consciousness in the paragraph after the one you responded to, which of course implies at least some reduction. I suspect you’ll tell me that those aren’t components of consciousness, which gets us back to the main issue of ambiguity I mentioned.

    I do admit that qualia are *subjectively* irreducible. But their neural correlates seem to be objectively reducible. To me, this is like saying the Notepad program in Microsoft Windows is simple, which it is for users, but most people would find the C code and Windows API programming that create the application to be impenetrable, and that’s before it’s compiled into binary code. Most users never see those underpinnings, just as we don’t have introspective access to the complex underpinnings of many aspects of consciousness, but that doesn’t mean they aren’t there.

  9. Self Aware

    You know what time is. You know what space is. You will never, ever be able to reduce or define either of them. If that makes them ‘simple’ (your choice of word, not mine) then so is consciousness. You know what consciousness is – if you didn’t, you wouldn’t even feel the need to try to define it, would you? You wouldn’t even begin to be aware of how unsatisfactory a definition might be. Your dissatisfaction with definitions of consciousness (the phenomenon) perhaps gives you a clue that it is indeed irreducible.

    That’s the irony of consciousness denial – an oxymoron. You can’t deny that something like consciousness exists unless you know what it is in the first place. Therefore you acknowledge its existence by the very fact of trying to deny it.

    As for the underlying physical mechanism that may or may not cause consciousness, that model – being based upon a standard mechanical human comprehension – will always be reducible, down to the base components of time, space and matter. But that’s an isolated and enclosed model – physical only in, physical only out – that can only be linked to consciousness by a non-quantitative correlative leap of imagination. It has no bearing on the irreducibility of consciousness – it is irrelevant to it.

    I think you are failing to distinguish the phenomenon from the causes of the phenomenon. Common enough in this debate, where people still argue that colours ‘are’ wavelengths of light.

    J

  10. John,
    My apologies on the “simple” adjective. You didn’t use it and I didn’t mean to put words in your mouth. It’s what I get for having multiple conversations simultaneously. It resulted in crossover on my part. Sorry.

    On the irreducibility of space and time, interestingly, there are actually some physicists who say they may not be. Space may turn out to be composed of discrete components, perhaps sized at the Planck length, and time is often discussed as possibly being an emergent phenomenon.

    I can’t see the argument that consciousness exists at the same level as space and time, or any primary physics concept. Consciousness seems inherently a composite phenomenon. Of course, for each of us conscious entities, our own personal conscious experience is fundamental, but it only seems fundamental because we have no introspective access to the underlying functionality.

    One important point. Just because something can be reduced to constituent components doesn’t mean it doesn’t exist. We know a table can be described purely in terms of the material it is constructed out of, or in terms of atoms or elementary particles, but our ability to do that doesn’t remove the usefulness of the table concept.

    On distinguishing between the physical causes of the phenomenon, and the phenomenon itself, I have to admit I’m a physicalist. I think once we account for all the physical functionality, we will have accounted for the thing itself. Maybe I’ll turn out to be wrong and there will remain something left unexplained. Only time will tell.

  11. Self

    The “reduction” of space you are referring to isn’t a semantic reduction. It’s a granularisation or quantizing of a continuum (space). Interesting, but certainly not reduction. Big space just consists of lots of little spaces.

    Space and time are not primarily physical concepts. They are primarily cognitive concepts, fundamental aspects of human cognition. That is why they are irreducible – as Kant recognised.

    By virtue of an innate ability to slice and proportion them, we can use them to create ingenious cultural artefacts such as mathematics and physics. The flow of knowledge history is from cognition to cultural synthesis – brains first, physics second. The problem that a lot of physicalists have (not saying this applies to you) is that they think physics isn’t a human activity – a biological artefact, in other words. Instead they regard it as a replacement for biblical definition. They think of it as a watermark of the universe itself – more real than its creators. Most mistakenly, they don’t see how it must be restricted in its purview by the fact that its human creators are animals, subject to cognitive scope and limits.

    Once you accept how derivative a pursuit physics truly is, there is no problem in bunching consciousness with time and space as all three are not physics concepts, but basic cognitive ones.

    Consciousness isn’t composite. There are different aspects of the phenomenon, but the being aware is the being aware. It’s indivisible, and the core phenomenon from which the composite facets emerge. I think you might be confusing consciousness with the consequences of consciousness here.

    Jbd
