Bridging the Brain

How can we find out how the brain works? This was one of five questions put to speakers at the Cognitive Computational Neuroscience conference, and the answers, posted on the conference blog, are interesting. There seems to be a generally shared perspective that what we need are bridges between levels of interpretation, though there are different takes on what those bridges are likely to be and how we get them. There's less agreement about the importance of recent advances in machine learning and how we should respond to them.

Rebecca Saxe says the biggest challenge is finding the bridge between different levels of interpretation – connecting neuronal activity on the one hand with thoughts and behaviour on the other. She thinks real progress will come when both levels have been described mathematically. That seems a reasonable aspiration for neurons, though the maths would surely be complex; but the mind boggles rather at the idea of expressing thoughts mathematically. It has been suggested in the past that formal logic would be at least an important part of this, but that hasn't really gone well in the AI field, and to me it seems quite possible that the unmanageable ambiguity of meanings puts thoughts beyond mathematical analysis (although it could be that in my naivety I'm underrating the subtlety of advanced mathematical techniques).

Odelia Schwartz looks to computational frameworks to bring together the different levels; I suppose the idea is that such frameworks might themselves have multiple interpretations, one resembling neural activity while another is pitched at the behavioural level. She is hopeful that advances in machine learning open the way to dealing with natural environments: that is a bit optimistic in my view, but perhaps not unreasonably so.

Nicole Rust advocates ‘thoughtful descriptions of the computations that the brain solves’. We got used, she rightly says, to the idea that the test of understanding was being able to build the machine. The answer to the problem of consciousness would not be a proof, but a robot. However, she points out, we’re now having to deal with the idea that we might build successful machines that we don’t understand.
Another way of bridging those levels is proposed by Birte Forstmann: formal models that make simultaneous predictions about different modalities, such as behaviour and the brain. It sounds good, but how do we pull it off?
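
To make the idea concrete, here is a minimal toy sketch of my own (not one of Forstmann's actual models, which are far more sophisticated): a single evidence-accumulation process whose one set of parameters yields predictions in two modalities at once – a behavioural quantity (reaction time) and a stand-in 'neural' signal (the ramping accumulator trace).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift=0.1, noise=1.0, threshold=30.0, dt=0.001):
    """One trial of a toy evidence-accumulation model.

    Evidence drifts upward with Gaussian noise until it hits a threshold.
    The crossing time is a behavioural prediction (reaction time); the
    accumulator trace stands in for a ramping neural signal, so a single
    parameter set constrains both modalities simultaneously.
    """
    evidence = 0.0
    trace = []
    steps = 0
    while evidence < threshold:
        evidence += drift + noise * rng.standard_normal()
        trace.append(evidence)
        steps += 1
    return steps * dt, np.array(trace)

# The same drift parameter shapes both the reaction-time distribution
# and the average ramp of the simulated 'neural' trace.
rts = [simulate_trial()[0] for _ in range(1000)]
print(f"mean simulated RT: {np.mean(rts):.3f} s")
```

The point is only that one formal object generates predictions about both behaviour and brain activity; whether any such model carves the brain at its joints is exactly the open question.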

Alona Fyshe sees three levels – neuronal, macro, and behavioural – and wants to bring them together through experimental research, crucially including real-world situations: you can learn something from studying subjects reading a sentence, but you're not getting the full picture unless you look at real conversations. It's a practical programme, but you have to accept that the correlations you observe might turn out to be complex and deceptive, or just incomprehensible.

Tom Griffiths has a slightly different set of levels, derived from David Marr: computational, algorithmic, and implementational. He feels the algorithmic level has been neglected, but I'd say it remains debatable whether the brain really has an algorithmic level. An algorithm implies a tractable level of complexity, whereas it could be that the brain's 'algorithms' are so complex that all explanatory power drains away. Unlike a computer, the brain is under no obligation to be human-legible.
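
For what an 'algorithmic level' amounts to, a toy illustration of my own (about code, not neurons): the computational-level job is the same – sort some numbers – but it can be carried out by quite different algorithms, and it is only at this middle level that the two versions differ. The worry above is that the brain's equivalents may be nothing like this legible.

```python
# Same computational-level problem, two different algorithmic-level solutions.

def insertion_sort(xs):
    """Simple, human-legible algorithm: build the output one item at a time."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """A structurally different algorithm computing exactly the same function."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [5, 3, 8, 1, 9, 2]
assert insertion_sort(data) == merge_sort(data) == sorted(data)
```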

Yoshua Bengio hopes that there is, at any rate, a compact set of computational principles in play. He advocates a continuing conversation between those doing deep learning and those doing other forms of research.
Wei Ji Ma expresses some doubt about the value of big data; he favours a diversity of old-fashioned small-scale, hypothesis-driven research and a search for evolutionarily meaningful principles. He's a little concerned about the prevalence of research based on rodents; we're really interested in the unique features of human cognition, and rats can't tell us about those.

Michael Shadlen is another sceptic about big data and a friend of hypothesis-driven research, working back from behaviour; he's less concerned with the brain as an abstract computational entity and more with its actual biological nature. People sometimes say that AI might achieve consciousness by non-biological means, just as we achieved flight without flapping wings; Shadlen, on that analogy, still wants to know how birds fly.

If this is a snapshot of the state of the field, I think it's encouraging; the views briefly indicated here seem to me to show good insight and promising lines of attack. But it is possible to fear that we need something more radical. Perhaps we'll only find out how the brain works when we ask the right questions, ones that realign our view in such a way that different levels of interpretation no longer seem to be the issue. Our current thinking is dominated by the concept of consciousness, but we know that in many ways that is a recent idea, primarily Western and perhaps even Anglo-Saxon. It might be that we need a paradigm shift; but alas I have no idea where that might come from.