A Different Gap

We’re often told that when facing philosophical problems, we should try to ‘carve them at the joints’. The biggest joint on offer in the case of consciousness has seemed to be the ‘explanatory gap’ between the physical activity of neurons and the subjective experience of consciousness. Now, in the latest JCS, Reggia, Monner, and Sylvester suggest that there is another gap, and one where our attention should rightly be focussed.

They suggest that while the simulation of certain cognitive processes has proceeded quite well, the project of actually instantiating consciousness computationally has essentially got nowhere.  That project, as they say, is affected by a variety of problems about defining and recognising success. But the real problem lies in an unrecognised issue: the computational explanatory gap. Whereas the explanatory gap we’re used to is between mind and brain, the computational gap is between high-level computational algorithms and low-level neuronal activity. At the high level, working top-down, we’ve done relatively well in elucidating how certain kinds of problem-solving, goal-directed kinds of computation work, and been able to simulate them relatively effectively.  At the neuronal, bottom-up level we’ve been able to explain certain kinds of pattern-recognition and routine learning systems. The two different kinds of processing have complementary strengths and weaknesses, but at the moment we have no clear idea of how one is built out of the other. This is the computational explanatory gap.

In philosophy, the authors plausibly claim, this important gap has been overlooked because in philosophical terms these are all ‘easy problem’ matters, and so tend to be dismissed as essentially similar matters of detail. In psychology, by contrast, the gap is salient but not clearly recognised as such: the lower-level processes correspond well to those identified as sub-conscious, while the higher-level ones match up with the reportable processes generally identified as conscious.

If Reggia, Monner and Sylvester are right, the well-established quest for the neural correlates of consciousness has been all very well, but what we really need is to bridge the gap by finding the computational correlates of consciousness. As a project, bridging this gap looks relatively promising, because it is all computational. We do not need to address any spooky phenomenology, we do not need to wonder how to express ‘what it is like’, or deal with anything ineffable; we just need to find the read-across between neural networking and the high-level algorithms which we can sort of see in operation. That may not be easy, but compared to the Hard Problem it sounds quite tractable. If solved, it will deliver a coherent account right from neural activity through to high-level decision making.

Of course, that leaves us with the Hard Problem unsolved, but the authors are optimistic that success might still banish the problem. They draw an analogy with artificial life: once it seemed obvious that there was a mysterious quality of being alive, and it was unclear how simple chemistry and physics could ever account for it. That problem of life has never been solved in those terms, but as our understanding of the immensely subtle chemistry of living things has improved, it has gradually come to seem less and less obvious that it is really a problem. In a similar way the authors hope that if the computational explanatory gap can be bridged, so that we gradually develop a full account of cognitive processes from the ground-level firing of neurons up to high-level conscious problem-solving, the Hard Problem will gradually cease to seem like something we need to worry about.

That is optimistic, but not unreasonably so, and I think the new perspective offered is a very interesting and plausible one. I’m convinced that the gap exists and that it needs to be bridged: but I’m less sure that it can easily be done.  It might be that Reggia, Monner, and Sylvester are affected in a different way by the same kind of outlook they criticise in philosophers: these are all computational problems, so they’re all tractable. I’m not sure how we can best address the gap, and I suspect it’s there not just because people have failed to recognise it, but because it is also genuinely difficult to deal with.

For one thing, the authors tend to assume the problem is computational. It’s not clear that computation is of the essence here. The low-level processes at a neuronal level don’t look to be based on running any algorithm – that’s part of the nature of the gap. High-level processes may be capable of being simulated algorithmically, but that doesn’t mean that’s the way the brain actually does it. Take the example of catching a ball – how do we get to the right place to intercept a ball flying through the air? One way to do this would be some complex calculations about perspective and vectors: the brain could abstract the data, do the sums, and send back the instructions that result. We could simulate that process in a computer quite well. But we know – I think – that that isn’t the way it’s actually done: the brain uses a simpler and quicker process which never involves abstract calculation, but is based on straight matching of two inputs; a process which incidentally corresponds to a sub-optimal algorithm, but one that is good enough in practice. We just run forward if the elevation of the ball is reducing and back if it’s increasing. Fielders are incapable of predicting where a ball is going, but they can run towards the spot in such a way as to be there when the ball arrives. It might be that all the ‘higher-level’ processes are like this, and that an attempt to match up with ideally-modelled algorithms is therefore categorically off-track.
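
For those who like to see this made concrete, here is a deliberately minimal toy simulation. A caveat: the rule as I stated it above is the informal version; what the sketch implements is the closely related, standard ‘optical acceleration cancellation’ variant of the gaze heuristic (run so that the ball keeps climbing in your visual field at a steady rate), and every number in it is illustrative rather than taken from the paper. The point is only that the fielder can end up in roughly the right place without ever computing a trajectory.

```python
"""Toy sketch of a 'no prediction, just track a visual variable' fielder.
This is NOT the model from the paper discussed above: it implements the
standard 'optical acceleration cancellation' flavour of the gaze heuristic,
and all the numbers are illustrative."""

import math

DT = 0.01   # simulation time step (s)
G = 9.81    # gravity (m/s^2)

def run_fielder(ball_speed=25.0, launch_angle_deg=45.0, start_x=75.0, vmax=7.0):
    ang = math.radians(launch_angle_deg)
    bx, by = 0.0, 1.0                               # ball position (m)
    vx, vy = ball_speed * math.cos(ang), ball_speed * math.sin(ang)
    fx = start_x                                    # fielder position (m)
    t, prev_tan, target_rate = 0.0, None, None

    while by > 0.0:                                 # until the ball lands
        bx += vx * DT                               # ball physics: the fielder
        vy -= G * DT                                # never uses these numbers
        by += vy * DT
        t += DT

        gap = max(fx - bx, 1e-6)                    # ball stays in front of the fielder
        tan_elev = by / gap                         # tangent of the elevation angle

        if prev_tan is not None:
            rate = (tan_elev - prev_tan) / DT       # how fast the ball climbs in view
            if target_rate is None and t > 0.25:
                target_rate = rate                  # lock in a reference rate early on
            elif target_rate is not None:
                # The heuristic: keep the ball climbing at a steady rate.
                # Climbing too fast -> it will carry over: back up.
                # Climbing too slowly -> it will drop short: run in.
                fx += vmax * DT if rate > target_rate else -vmax * DT
        prev_tan = tan_elev

    return bx, fx                                   # landing spot vs fielder's final spot

if __name__ == "__main__":
    landing, fielder = run_fielder()
    # With these illustrative numbers the fielder should finish close to the
    # landing point, despite never estimating the trajectory.
    print(f"ball lands at {landing:.1f} m; fielder finishes at {fielder:.1f} m")
```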

Even if those doubts are right, however, it doesn’t mean that the proposed re-framing of the investigation is wrong or unhelpful, and in fact I’m inclined to think it’s a very useful new perspective.

 

18 thoughts on “A Different Gap”

  1. Ooh Peter, there is so much to discuss here, how will I manage to keep this message relatively short?
    I’ll try to restrict myself to a few refs, as pointers to directions of study that I am aware of and that are specifically trying to bridge the computational gap. I don’t have direct access to JCS, so all that follows is based on your commentary, not the article you are referring to.

    First of all, it may be the case that the parallel between the “Hard” and “Computational” problems is new, but of course lots of people are well aware of the computational gap, and efforts trying to bridge it are not uncommon.
    First: Tononi’s Integrated Information Theory (IIT) is strong on this side. It approaches the issue from the pure mathematical side, so it does risk going off-track from the very start. However, the nice solution implemented in IIT is that it is built from fundamental and very abstract principles, so it could in principle be used to guide our understanding of actual implementations. IIT version 3.0 (Open Access; full ref. below) is the latest edition, and to me it looks like a theoretical top-down attempt to bridge the computational gap. [Side note: IIT also postulates that Information Integration is all you need to generate phenomenal experience, I think, and I’m utterly unconvinced by this claim.]

    Second, I think you are right:

    It might be that all the ‘higher-level’ processes are like this [suboptimal algorithms that work well enough], and that an attempt to match up with ideally-modelled algorithms is therefore categorically off-track.

    I guess that to avoid this trap (apart from all-embracing generalisations like in IIT), the best way forward is to tap into evolutionary (and, in parallel, developmental) considerations. One old (but still relevant, unless I’ve missed something revolutionary) attempt to sketch why and how to proceed in this way comes from Gary Marcus (full ref. below – not open access, I have a copy, if you know what I mean…). It is still top-down, so it doesn’t get anywhere close to actual neurons, but it’s worth mentioning because it does try to include multidisciplinary empirical knowledge, and I don’t think we can hope to proceed otherwise. Note also how Marcus describes the computational gap in his own, very modern terms here.

    Third and final: (in my opinion) a credible idea that in theory can bridge the computational gap already exists. It’s called predictive coding (PC), and, perhaps unsurprisingly, I find it difficult to describe it as either top-down or bottom-up; to me, it’s both: on one side, the overall concept is firmly top-down, but on the other, we do have examples of real neuronal circuits that seem to work exactly in the expected way, so there is also a very important (and, to me, convincing) bottom-up component. In the philosophical camp, the champion of this approach is Andy Clark; you can find a gentle introduction here and a full-length, but still clear and enjoyable, scholarly discussion (not sure if it’s OA, but the link goes to the full PDF: main article with replies).

    Why do I find PC convincing? Because:
    A. It’s compatible with Marcus’ approach, in the sense that it is absolutely straightforward to propose a credible evolutionary path from the first instantiation of a predictive coding neural structure to much more complex cognition; at the same time, the framework makes it possible to propose how development and learning might work.
    B. It’s parsimonious, both in terms of the theory itself and in terms of the architecture it proposes. In more detail, it suggests that signals are the bottleneck and that PC is the way the brain reduces the amount of information that needs to travel across sub-systems (a minimal toy sketch of this error-only signalling idea follows the list below). It’s less parsimonious with actual computations, but in turn it does explain how neural connections would encode recorded information (memory).
    C. It’s surprisingly related to Shannon’s Information Theory, making it appealing from a computer-science point of view.
    D. The generic algorithm is truly general purpose, so it’s quite possible that it is actually instantiated many times in a hierarchical structure, and thus the (genetic) instructions to generate the basic neural architecture can be expected to be recycled over and over in different parts of the brain.
    E. If predictive coding is indeed ubiquitous, then we do know how to build the computational bridge, and in fact there are plenty of people who are very busy building it.
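
    To make point B a little more concrete, here is a deliberately minimal toy sketch of the ‘send only the errors’ idea. To be clear, this is not any published PC model: the single running-average predictor, the threshold and all the numbers are assumptions made purely for illustration.

```python
"""Toy sketch of the 'transmit only prediction errors' idea.
Not a published predictive-coding model: the running-average predictor,
the threshold and the numbers are illustrative assumptions."""

import random

random.seed(0)

class ToyPredictiveUnit:
    """Predicts the next input and forwards only the prediction error."""

    def __init__(self, learning_rate=0.2, threshold=0.1):
        self.prediction = 0.0
        self.learning_rate = learning_rate
        self.threshold = threshold

    def observe(self, signal):
        error = signal - self.prediction
        self.prediction += self.learning_rate * error   # learn from the surprise
        # Only errors worth reporting travel upward; the prediction
        # stands in for everything that was expected anyway.
        return error if abs(error) > self.threshold else None

def sensory_stream(n=200):
    """A mostly predictable input with occasional surprises."""
    for i in range(n):
        value = 1.0                      # the boring, expected signal
        if i % 50 == 49:
            value = 3.0                  # a rare surprising event
        yield value + random.gauss(0, 0.02)

if __name__ == "__main__":
    unit = ToyPredictiveUnit()
    sent = [e for e in (unit.observe(s) for s in sensory_stream()) if e is not None]
    # Once the prediction settles, only the surprises (and the brief
    # re-adaptation after each one) need to travel up the line.
    print(f"messages transmitted upward: {len(sent)} out of 200")
```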

    On the other hand, evolution likes to tinker, so it’s likely that, even if the general idea does hold, we’ll find that it is instantiated in a virtually endless number of slightly different ways, making the process of completing the bridge painstakingly slow and messy.

    References:

    Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLoS Computational Biology, 10(5), e1003588.

    Marcus, G. (2009). How does the mind work? Insights from biology. Topics in Cognitive Science, 1(1), 145–172.

    Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.

  2. This is another problem I think Blind Brain Theory handily resolves. The crazy coincidence, Peter, is that I’ve been using the gaze heuristic as a way to illustrate what’s going on! I actually have a post on this in utero (that’s twice you’ve scooped me!) that I’ll be putting up soon.

    A great lens for this problem can be found in Piccinini and Craver’s attempt to extend the latter’s notion of ‘mechanism sketches’ to the kinds of functional analyses characteristic of psychology. Look, they say, there has to be a reason why psychologists take neuroscientific data seriously when formulating their analyses. The reason, they contend, is that their analyses are actually ‘sketches’ of neural mechanisms. This explains why they are all ‘quasi-mechanistic.’ But as others (Stinson has the most lucid and succinct case I know of) have pointed out, these analyses can’t be ‘sketches’ of mechanisms that neuroscience will fill in, because they quite often contradict the systems they generalize over, while retaining their ability to predict subject responses, etc. Whatever functional analyses are tracking, it ain’t the mechanisms!

    Thus the ‘other hard problem.’

    You’ve all heard me rail about the ‘curse of dimensionality,’ the difficulty of isolating useful patterns in monstrous data sets. Dimensionality is the primary metaphor I use to capture the ‘spookiness’ of intentional phenomena on a gradient continuous with the high dimensionality of natural phenomena. It’s what allows us to see the ‘inexplicables’ as the product of dimensional neglect (Eliasmith’s ‘semantic pointers’ actually turn on strategic neglect – Thagard actually has a new theory of consciousness based on Eliasmith’s work that everyone here should check out).

    Now consider the gaze heuristic, where fielders simply follow the ball to where it lands. With Gaze, fixing the ball in the visual field allows the ball itself to constrain locomotion, allows the fielder to ‘lock into’ the ball-flying system in a manner that solves the system, neglecting detailed information regarding velocity, trajectory, and intervening conditions. One might liken gaze to a simple key that, when applied to certain locks, reliably opens them despite remaining ignorant of the lock.

    I think you’re entirely right, Peter. To assume that functional analyses ‘pick out’ actual algorithmic processes in the brain is to assume that functional analyses are not actually gaze-like ‘lock into’ devices, that is, special purpose heuristics that allow the solution of specialized problem ecologies by ‘locking psychologists into’ the dimensional madness of brain function in happy ways. They provide an empirically regimented way of tackling problems *neglecting what is actually going on.*

    This would be why no one can organize functional analyses into anything that doesn’t resemble a toolbox, a practical bric-a-brac. This would be why functional analyses seem so ‘spooky,’ each hanging alone, with little more than suggestive hints to knit them together. This would be why functional analyses cannot be ‘reduced’ to Craverian mechanism sketches. Because, in point of fact, *there are no such THINGS.* Psychologists are not in the business of describing something ‘spooky’ that somehow ‘supervenes’ on neuromechanistic function, so much as they are in the business of HACKING neuromechanistic functions in heavily regimented problem ecologies.

    In fact, it’s pretty easy to argue that this HAS to be the case, given the paucity of the information psychologists actually have to go on isolating their ‘functions.’ Like professional fielders, they’ve found a way to let the ball steer them to the catch.

    Of course, there’s much, much more to say here, especially regarding that biggest and fattest of sacred psycho-scientific cows, CONTENT. But BBT makes hash of that as well.

    I keep telling you guys… All the old problems simply fall away when you tip them upside down and you start looking at them in terms of heuristic neglect. Just remember you heard it here first!

  3. Peter, I think the way to make the problem more understandable would be to state it as a relational explanatory gap. Computationalism serves us by externalizing the problem, but in reality the fielder is never really looking at a ball: he is looking at his own body state, including the ball, which the visual field actually makes a part of his own being. Relationalism is more salient to the problem because the brain is actually a hierarchical organ that stitches together internal body states with perceptions from the specialized cortical areas; or rather, the brain itself is a supercortex that relationally overlays the specialized cortical areas.

  4. Computational analogies. None of what you mention refers to the chemical structure of a neuron. Do the work. Where is the neural capacity for “computation”? They are chemical channels to and from functional sites. The brain is not a Homunculus with its own will to direct the body by “computation”, just do the work on neurons instead of “pie in the sky” stuff.

  5. Specifically – “All the old problems simply fall away when you tip them upside down and you start looking at them in terms of heuristic neglect.” Perhaps (?) there is something to this, but I cannot even make a start because it lays no tracks.

    Can anyone decipher this? Knowledge begins and ends at clear expression. Does anyone want to engage in specifics about neurons and bodily functions rather than simply “describing” their first person experiences in imaginative terms?

  6. Morgan: I’m just trying to get a fix on your criticism. I literally don’t know what’s causing you such difficulty. If you agree sciences like psychology are fraught with conceptual confusions, then you agree that reconceptualization is required. But if you agree with that, then I really have no idea what you mean about ‘engaging in specifics about neurons.’ The whole problem Peter’s discussing here is the problem of conceptualizing specifics about neurons, is it not?

    I appreciate that I have my head up my ass about a good number of things, but this…

  7. Difficulty? Your position is upside-down to start. Difficulty is the name of the game. The issue is warrant. You think it’s easy to throw around “concepts” about “neurons”? I find if you root out specific facts about neurons, the concepts appear without the need for obfuscation. I’ve written extensively about this, so it doesn’t bear repeating here. Not sure of your vintage, Bakker, but with maturity one finds things fall into place without being forced.

  8. Oh, I just noticed you didn’t answer the question thrown back at you, because you are “just trying to get a fix on my criticism” – by asking the most open-ended question possible. I would prefer you answer the question, rather than dodge and then use the dodge as an excuse to throw up an obfuscation about my position. Anyway, who cares?

  9. Morgan: I’m still trying to pin down your complaint. You say, “I find if you root out specific facts about neurons, the concepts appear without the need for obfuscation.” Okay. So your position is that the problem of mapping the mental onto the neural is one that will sort itself out as neuroscience matures, and so all attempts to reconceptualise the problem are intrinsically worthless, or ‘obfuscation’?

    The question of what aren’t hallmarks of conceptual confusion I took to be a rhetorical dodge of my question. But I’ll bite. Since it seems rather obvious that cognitive science is thoroughly infiltrated by philosophers, it seems pretty obvious, to me at least, that cognitive science is suffering from conceptual confusion over and above that suffered in other sciences. The clearest sign of the absence of conceptual confusion is the absence of philosophers on the front line of a debate. We’re scavengers. We go where the meat is!

    My question, meanwhile, still stands – because I still don’t know what you’re talking about!

  10. Bakker, to answer your further question: no, only most attempts at conceptualization. The one noted above in comment 6 is included in that category, in my view. Bakker, we are beyond deciphering it, which was my original request. You have taken another course. We are beyond you answering my question, as you have just ignored that obligation. But we are not beyond your first “split” in the big question you ask in response to my request for deciphering, and you repeat your curiosity about what I think, above.

    Bakker, if you are so interested in what I think, read my work. You like big questions, then expect big answers, so go and read it, and don’t make excuses. You asked for it. I find it strange that you have such a broad interest in what I think and you never thought to click my name and check it. But then again, perhaps people these days like things on a plate and have little initiative to satisfy their evident curiosity. A general problem.

    Bakker, you have said nothing about neurons in this post. You have said nothing about concepts. Talking about the fact that there are neurons and concepts is of no value. This is a prime example of split “nothing” discussion, perhaps aimed at wasting time, who knows? Who cares, this is downtime anyway.

  11. Bakker: Before I finish feeding off the feeds kindly provided by Peter, let me clarify, in case you are under the misapprehension that I am a philosopher, included in “we”. I am a lawyer of 30 years’ standing, and it’s my view, expressed in my work, that science and philosophy (same thing, one tending empirical, the other rational) have utterly failed conceptually AND empirically (which, as I say, go together comfortably). I appreciate your fun and games, but the issue is much broader – about respect for nature, including human nature, or trashing it.

  12. Sergio Graziosi,

    Regarding the Predictive Coding theory — how does it relate to consciousness? As I understand it, it is a proposal for WHAT the mind does (and not HOW it does what it does). But is there room for consciousness in this theory?

  13. ithio:
    relating Predictive Coding (PC) to our struggles with consciousness would require a long discussion; I have an almost fully formed idea of how to do so, but can’t possibly hope to explain it appropriately here.

    However, I’m glad you’ve asked, and will try to provide a few hints.
    1. PC proposes an almost general-purpose algorithmic function that ties perception to learning.
    2. Because the same algorithm can be used to process information at different levels of abstraction, it is evolutionarily convincing: to get more and more sophisticated “cognition”, you just need to add more and more layers, so evolution doesn’t need to invent new systems, it just recycles what’s already there (something that evolution does well). A toy sketch of this recycling-in-layers idea follows the list below.
    3. We do have an emerging idea of how PC is implemented by neural networks; as far as I know, it is still a fairly preliminary family of sketches, but it means that the potential of bridging the computational gap is there. We don’t know if PC is enough or relevant to bridge this gap, but we do know it is worth exploring the possibility.
    4. Being general purpose, PC makes it possible to extend existing ideas. For example, one could try to transform Trehub’s retinoid theory into a PC-enabled version, and doing so extends the same idea well beyond the original visual/spatial implementation.
    5. As such, PC is beautifully positioned to provide a solution to the issue that Scott Bakker describes here as “no one can organize functional analyses into anything that doesn’t resemble a toolbox, a practical bric-a-brac.”
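
    And here is the toy sketch promised under point 2: literally the same predict-and-pass-the-error module, recycled three times and stacked into a little hierarchy. Again, the module, the input stream and every number are my own illustrative assumptions, not a published hierarchical PC architecture; the only point is that each stage explains away part of what it receives, so less and less of the raw signal needs to travel upward.

```python
"""Cartoon of 'recycle the same module in layers'.
Not a published hierarchical predictive-coding architecture: the module,
the toy input stream and all numbers are illustrative assumptions."""

import math

class PredictiveLayer:
    """Predicts its input and passes only the residual error upward."""

    def __init__(self, learning_rate=0.5):
        self.prediction = 0.0
        self.lr = learning_rate

    def step(self, signal):
        error = signal - self.prediction
        self.prediction += self.lr * error   # learn from whatever surprised us
        return error                         # only the surprise travels on

def run_hierarchy(n_steps=500):
    layers = [PredictiveLayer() for _ in range(3)]   # the same module, recycled
    reaching = [0.0] * len(layers)                   # mean |signal| arriving at each layer

    for t in range(n_steps):
        # Toy 'sensory' input: a slow drift plus a faster wiggle.
        signal = math.sin(t / 80.0) + 0.3 * math.sin(t / 3.0)
        for i, layer in enumerate(layers):
            reaching[i] += abs(signal) / n_steps     # record what arrives here
            signal = layer.step(signal)              # hand the residual upward

    return [round(x, 3) for x in reaching]

if __name__ == "__main__":
    # Each stage explains away part of what it receives, so the average signal
    # arriving at higher layers should shrink from bottom to top.
    print("mean |signal| reaching each layer, bottom to top:", run_hierarchy())
```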

    It does not directly tackle or solve the hard problem, of course, but it’s a functionalist view of how the brain may work, so it’s a start. The missing link, the infamous “explanatory gap” of the original hard problem, can’t be closed by PC directly, but hints exist. Scott’s BBT and Loorits’ structural qualia provide two (compatible) accounts of why the gap exists, and my hunch is that they are right. The last piece in the puzzle would be the reason why Phenomenal Experience needs to exist, and my take is that this should be (but of course isn’t) obvious: you need experience to learn (hence the link to PC). Plug in some meta-cognition, the ability to think about your own thoughts and revisit your memories (of past perceptions), and you find the hard problem, which can’t be solved by meta-cognition directly, because meta-cognition is necessarily blind to all the monstrously complex processing that happens behind the scenes.

    As I’ve said, you’d need two or three full-length essays to substantiate all I’m saying, but perhaps this is enough to tickle your curiosity…

  14. Consciousness is separateness. All conscious experience seems to be solo and you cannot share it with anyone or anything else. It’s like the oneness of the universe (imagine a half-inflated balloon) can be pinched at points to raise up and have a different viewpoint (imagine a ball with lots of prickly bits). Or maybe like compressing matter into a neutron star to get intense gravity in a smaller location.

    What if there is consciousness everywhere (perhaps in very tiny quantities), but because it is a self-only thing, we cannot perceive it – like not seeing the wood for the trees. Maybe even basic computer software has a tiny amount of consciousness (albeit without ‘free will’ etc.), but we discount that as unlikely. Maybe Google’s algorithm is a bit more conscious, and IBM’s Watson a little more so (though still not very much).

    Perhaps the brain massively amplifies consciousness by squeezing lots of little bits of consciousness together using its massively parallel architecture. Maybe self-consciousness is a matter of amplification. Our brains could be consciousness amplifiers. The bigger the brain the more consciousness…
