Bridging the Brain

How can we find out how the brain works? This was one of five questions put to speakers at the Cognitive Computational Neuroscience conference, and the answers, posted on the conference blog, are interesting. There seems to be a generally shared perspective that what we need are bridges between levels of interpretation, though there are different takes on what those are likely to be and how we get them. There’s less agreement about the importance of recent advances in machine learning and how we respond to them.

Rebecca Saxe says the biggest challenge is finding the bridge between different levels of interpretation – connecting neuronal activity on the one hand with thoughts and behaviour on the other. She thinks real progress will come when both levels have been described mathematically. That seems a reasonable aspiration for neurons, though the maths would surely be complex; but the mind boggles rather at the idea of expressing thoughts mathematically. It has been suggested in the past that formal logic would be at least an important part of this, but that hasn’t really gone well in the AI field, and to me it seems quite possible that the unmanageable ambiguity of meanings puts them beyond mathematical analysis (although it could be that in my naivety I’m under-rating the subtlety of advanced mathematical techniques).

Odelia Schwartz looks to computational frameworks to bring together the different levels; I suppose the idea is that such frameworks might themselves have multiple interpretations, one resembling neural activity while another operates at the behavioural level. She is hopeful that advances in machine learning open the way to dealing with natural environments: that is a bit optimistic in my view, but perhaps not unreasonably so.

Nicole Rust advocates ‘thoughtful descriptions of the computations that the brain solves’. We got used, she rightly says, to the idea that the test of understanding was being able to build the machine. The answer to the problem of consciousness would not be a proof, but a robot. However, she points out, we’re now having to deal with the idea that we might build successful machines that we don’t understand.

Another way of bridging those levels is proposed by Birte Forstmann: formal models that make simultaneous predictions about different modalities such as behaviour and the brain. It sounds good, but how do we pull it off?

Alona Fyshe sees three levels – neuronal, macro, and behavioural – and wants to bring them together through experimental research, crucially including real-world situations: you can learn something from studying subjects reading a sentence, but you’re not getting the full picture unless you look at real conversations. It’s a practical programme, but you have to accept that the correlations you observe might turn out to be complex and deceptive – or just incomprehensible.

Tom Griffiths has a slightly different set of levels, derived from David Marr: computational, algorithmic, and implementational. He feels the algorithmic level has been neglected; but I’d say it remains debatable whether the brain really has an algorithmic level. An algorithm implies a tractable level of complexity, whereas it could be that the brain’s ‘algorithms’ are so complex that all explanatory power drains away. Unlike a computer, the brain is under no obligation to be human-legible.

Yoshua Bengio hopes that there is, at any rate, a compact set of computational principles in play. He advocates a continuing conversation between those doing deep learning and other forms of research.

Wei Ji Ma expresses some doubt about the value of big data; he favours a diversity of old-fashioned small-scale, hypothesis-driven research – a search for evolutionarily meaningful principles. He’s a little concerned about the prevalence of research based on rodents; we’re really interested in the unique features of human cognition, and rats can’t tell us about those.

Michael Shadlen is another sceptic about big data and a friend of hypothesis-driven research, working back from behaviour; he’s less concerned with the brain as an abstract computational entity and more with its actual biological nature. People sometimes say that AI might achieve consciousness by non-biological means, just as we achieved flight without flapping wings; Shadlen, on that analogy, still wants to know how birds fly.

If this is a snapshot of the state of the field, I think it’s encouraging; the views briefly indicated here seem to me to show good insight and promising lines of attack. But it is possible to fear that we need something more radical. Perhaps we’ll only find out how the brain works when we ask the right questions, ones that realign our view in such a way that different levels of interpretation no longer seem to be the issue. Our current views are dominated by the concept of consciousness, but we know that in many ways that is a recent idea, primarily Western and perhaps even Anglo-Saxon. It might be that we need a paradigm shift; but alas I have no idea where that might come from.

Tallis vs Eagleman

The Observer has a discussion between Raymond Tallis and David Eagleman, Eagleman representing neural reductionism and Tallis speaking for a more traditional view of mind and brain.

Although it’s worth reading, it turns out to be a slightly inconclusive encounter. Perhaps on this occasion you’d give Tallis a points victory because he does seem to be looking for a fight, whereas Eagleman is in rather cautious form. They circle each other but never quite identify a proposition which sums up their disagreement clearly enough to get things going.

What seems to emerge is a kind of agreement that mental activity needs to be addressed on more than one level of explanation, with the two antagonists merely giving a different balance of emphasis. This certainly understates the real disagreement between the two.

I think it probably is the case that nearly everyone grants the need for more than one level of explanation. There are those who would say the correct top level is the cosmos itself and that individual consciousness expresses a universal entity. At a level not quite as high as that, we surely need to address consciousness on the level of its explicit and social content; we could call this the ‘home’ level because it is sort of where we live, where we actually experience the world. Most would agree that there are levels of unconscious operation that are also a necessary part of the picture; not many people would say that the structure of the brain and its component neurons tell us nothing; and a majority nowadays would agree that there is ultimately a story at the classical molecular level which, though vastly complex, cannot be ignored. Some say even this is not enough and that consciousness cannot be understood without giving quantum mechanics, or some as-yet-unknown lower-level theory, a crucial role.

Only a very hard-line reductionist would say we only need one of these levels: it’s generally accepted that there are interesting things to be said on several of them which simply cannot be addressed at other levels. What mainly emerges here is Tallis’ defence of the ‘home’ level against Eagleman’s contention that we pay it too much attention and that for many purposes, including our treatment of crime and punishment, we should dethrone it. Intuitively, the motives for Tallis’ incredulity are pretty clear: wouldn’t it be weird if we had developed the apparatus of thought and consciousness and yet it had no important impact on our behaviour? Don’t we just know that discussion and conscious thought ultimately shape what we do, even if our behaviour is sometimes nudged in different directions by factors we’re not aware of?

Yet there is something deeply unsatisfactory about the whole idea of different levels of explanation, isn’t there? How can one reality require half-a-dozen different accounts? It seems a distressingly messy and arbitrary kind of way for the world to be set up, and certainly we greet any successful reduction of higher-level entities to lower-level ones as a valuable explanatory achievement. So it’s not hard to sympathise with Eagleman’s desire to emphasise the role of levels below consciousness either.

Generally speaking it seems that the lower the level of our explanation the better, as though ultimate reality resides at the lowest micro level we can get to. We always celebrate reductions, not elaborations. Yet there have been some rebellious attempts to push things the other way, through ideas such as emergence and embodiment, which claim the whole can be more important than the parts.

I notice myself that things seem to come most clearly into focus at or slightly below the home level: if we go far above or below that, we start to get into regions where we have to deal in probabilities or slightly fuzzy concepts. Most notably, there don’t seem to be identities at other levels in quite the sharp way there are on the home level. Even molecules are interchangeable: they tell us that it’s almost certain we’re breathing at least one atom from the oxygen previously breathed by Julius Caesar, but how could you possibly tell? You can’t label an atom. Another one of the same kind is distinguishable only by where it is. When we go further down, even spatial positions start to get a bit smudged. Equally, if we start going up the chain, we can only draw slightly fuzzy conclusions about what my family, or the society I live in, thinks or does. This might be a reason to think that real reality is around the home level – or it might be a reason to think that the whole business of levels simply flows from my restricted viewpoint and limited understanding.
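
For what it’s worth, the Caesar claim is a purely statistical one, which is rather the point: nobody is tracking individual atoms. Here is a rough back-of-the-envelope sketch of the usual version of the argument; the figures – roughly 2×10^22 molecules in a single breath and roughly 10^44 molecules in the whole atmosphere – are my own round estimates, not anything from the discussion itself, and the assumption that Caesar’s exhalations are by now evenly mixed through the air is doing a lot of the work.

```python
import math

# Rough, rounded figures (assumptions, not measurements):
molecules_per_breath = 2e22      # molecules in one ~0.5 litre breath
molecules_in_atmosphere = 1e44   # molecules in the whole atmosphere

# Assume Caesar's exhaled molecules are now uniformly mixed through the atmosphere.
p_one_molecule = molecules_per_breath / molecules_in_atmosphere  # chance any single inhaled molecule was his
expected_overlap = molecules_per_breath * p_one_molecule         # expected shared molecules per breath you take

# Probability of at least one shared molecule (Poisson approximation).
p_at_least_one = 1 - math.exp(-expected_overlap)

print(f"Expected shared molecules per breath: {expected_overlap:.1f}")  # about 4
print(f"Probability of at least one: {p_at_least_one:.2f}")             # about 0.98
```

On those figures the expected overlap is about four molecules per breath, so ‘almost certain’ is fair – but, as above, the only thing you could ever say about any particular molecule is where it happens to be.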

Perhaps, if my brain were capable of holding it, there is a view on which all the levels could come together. After all, I go on thinking about temperature even though I know it is only molecular motion: perhaps in the end we’ll find a way of thinking about the different aspects of mental activity which brings them together without eliminating anything. Perhaps then it might become clear that Eagleman and Tallis don’t really disagree at all. I wouldn’t put any money on that, though.