A Different Gap

We’re often told that when facing philosophical problems, we should try to ‘carve them at the joints’. The biggest joint on offer in the case of consciousness has seemed to be the ‘explanatory gap’ between the physical activity of neurons and the subjective experience of consciousness. Now, in the latest JCS, Reggia, Monner, and Sylvester suggest that there is another gap, and one where our attention should rightly be focussed.

They suggest that while the simulation of certain cognitive processes has proceeded quite well, the project of actually instantiating consciousness computationally has essentially got nowhere. That project, as they say, is beset by a variety of problems about defining and recognising success. But the real problem lies in an unrecognised issue: the computational explanatory gap. Whereas the explanatory gap we’re used to lies between mind and brain, the computational gap lies between high-level computational algorithms and low-level neuronal activity. At the high level, working top-down, we’ve done relatively well in elucidating how certain goal-directed, problem-solving kinds of computation work, and have been able to simulate them relatively effectively. At the neuronal, bottom-up level we’ve been able to explain certain kinds of pattern-recognition and routine learning systems. The two kinds of processing have complementary strengths and weaknesses, but at the moment we have no clear idea of how one is built out of the other. This is the computational explanatory gap.

In philosophy, the authors plausibly claim, this important gap has been overlooked because in philosophical terms these are all ‘easy problem’ matters, and so tend to be dismissed as essentially similar matters of detail. In psychology, by contrast, the gap is salient but not clearly recognised as such: the lower-level processes correspond well to those identified as sub-conscious, while the higher-level ones match up with the reportable processes generally identified as conscious.

If Reggia, Monner and Sylvester are right, the well-established quest for the neural correlates of consciousness has been all very well, but what we really need is to bridge the gap by finding the computational correlates of consciousness. As a project, bridging this gap looks relatively promising, because it is all computational. We do not need to address any spooky phenomenology, we do not need to wonder how to express ‘what it is like’, or deal with anything ineffable; we just need to find the read-across between neural networking and the high-level algorithms which we can sort of see in operation. That may not be easy, but compared to the Hard Problem it sounds quite tractable. If solved, it will deliver a coherent account right from neural activity through to high-level decision making.

Of course, that leaves us with the Hard Problem unsolved, but the authors are optimistic that success might still banish the problem. They draw an analogy with artificial life: once it seemed obvious that there was a mysterious quality of being alive, and it was unclear how simple chemistry and physics could ever account for it. That problem of life has never been solved in those terms, but as our understanding of the immensely subtle chemistry of living things has improved, it has gradually come to seem less and less obvious that it is really a problem at all. In a similar way the authors hope that if the computational explanatory gap can be bridged, so that we gradually develop a full account of cognitive processes from the ground-level firing of neurons up to high-level conscious problem-solving, the Hard Problem will gradually cease to seem like something we need to worry about.

That is optimistic, but not unreasonably so, and I think the new perspective offered is a very interesting and plausible one. I’m convinced that the gap exists and that it needs to be bridged: but I’m less sure that it can easily be done.  It might be that Reggia, Monner, and Sylvester are affected in a different way by the same kind of outlook they criticise in philosophers: these are all computational problems, so they’re all tractable. I’m not sure how we can best address the gap, and I suspect it’s there not just because people have failed to recognise it, but because it is also genuinely difficult to deal with.

For one thing, the authors tend to assume the problem is computational, and it’s not clear that computation is of the essence here. The low-level processes at a neuronal level don’t look to be based on running any algorithm – that’s part of the nature of the gap. High-level processes may lend themselves to algorithmic simulation, but that doesn’t mean that’s the way the brain actually does things. Take the example of catching a ball – how do we get to the right place to intercept a ball flying through the air? One way to do this would be some complex calculations about perspective and vectors: the brain could abstract the data, do the sums, and send back the instructions that result. We could simulate that process in a computer quite well. But we know – I think – that that isn’t the way it’s actually done: the brain uses a simpler and quicker process which never involves abstract calculation, but is based on straight matching of two inputs; a process which incidentally corresponds to a sub-optimal algorithm, but one that is good enough in practice. We just run forward if the elevation of the ball is reducing and back if it’s increasing. Fielders are incapable of predicting where a ball is going, but they can run towards the spot in such a way as to be there when the ball arrives. It might be that all the ‘higher-level’ processes are like this, and that an attempt to match them up with ideally-modelled algorithms is therefore categorically off-track.
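The contrast is easy to sketch. Below is a toy one-dimensional simulation of the simple rule just described – all the parameters (launch speed, angles, running speed) are invented for illustration, and ‘forward’ is read as ‘towards the ball’. The fielder never calculates a trajectory; he just runs towards the ball while its angle of elevation is falling and away from it while the angle is rising. This is a sketch of the style of processing, not a model of real fielding: the version studied in the perception literature, ‘optical acceleration cancellation’, is a refinement of this crude sign-of-change rule.

```python
import math

def simulate_catch(v0=30.0, launch_deg=60.0, fielder_start=40.0,
                   fielder_speed=8.0, g=9.81, dt=0.01):
    """Toy version of the fielder's rule: never compute the landing
    point, just run towards the ball while its angle of elevation is
    falling, and away from it while the angle is rising."""
    theta = math.radians(launch_deg)
    bx, by = 0.0, 0.0                        # ball position (batter at origin)
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    fx = fielder_start                       # fielder's position on the same line
    prev_elev = None
    while by >= 0.0:                         # until the ball lands
        bx += vx * dt
        by += vy * dt
        vy -= g * dt
        # Angle of elevation of the ball as the fielder sees it.
        elev = math.atan2(by, max(abs(bx - fx), 1e-9))
        if prev_elev is not None:
            toward = math.copysign(1.0, bx - fx)
            if elev < prev_elev:             # elevation falling: run in
                fx += toward * fielder_speed * dt
            elif elev > prev_elev:           # elevation rising: run back
                fx -= toward * fielder_speed * dt
        prev_elev = elev
    return bx, fx                            # landing point, fielder's final position
```

With the default parameters the fielder starts some forty metres from where the ball will land and, steering only by the changing elevation angle, closes most of that gap by the time the ball comes down – without ever solving the projectile equations.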

Even if those doubts are right, however, it doesn’t mean that the proposed re-framing of the investigation is wrong or unhelpful, and in fact I’m inclined to think it’s a very useful new perspective.