Mapping the Connectome

Could Artificial Intelligence be the tool that finally allows us to understand the natural kind?

We’ve talked before about the possibility; this Nautilus piece explains that scientists at the Max Planck Institute of Neurobiology have come up with a way of using ‘neural’ networks to map, well, neural networks. There has been continued improvement in our ability to generate images of neurons and their connections, but using those abilities to put together a complete map is a formidable task; the brain has often been described as the most complex object in the universe, and drawing up a full schematic of its connections is potentially enough work for a lifetime. Yet that map may well be crucial; recently the idea of a ‘connectome’, roughly equivalent to the ‘genome’, has become a popular concept, one that suggests such a map may be an essential part of understanding the brain and consciousness. The Max Planck scientists have developed an AI, called ‘SyConn’, which turns images into maps automatically with very high accuracy. In principle, I suppose, this means we could have a complete human connectome in the not-too-distant future.

How much good would it do us, though? It can’t be bad to have a complete map, but there are three big problems. The first is that we can already be pretty sure that connections between neurons are not the whole story. Neurons come in many different varieties, ones that pretty clearly seem to have different functions – but it’s not really clear what those functions are. They operate with a vast repertoire of neurotransmitters, and are themselves pretty complex entities that may have genuine computational properties all on their own. They are supported by a population of other, non-networked cells that may have a crucial role in how the overall system works. They seem to influence each other in ways that do not require direct connection: through electromagnetic fields, or even through DNA messages. Some believe that consciousness is not entirely a matter of network computation anyway, but resides in quantum or electrical fields; certainly the artificial networks that were originally inspired by biological neurology seem to behave in ways that are useful but quite distinct from those of human cognition. Benjamin Libet thought that if only he could do the experiment, he might be able to demonstrate that a sliver of the brain cut out from its neighbouring tissue but left in situ would continue to do its job. That, surely, is going too far; the brain didn’t grow all those connections with such care for nothing. The connectome may not be the full story, but it has to be important.

The second problem, though, is that we might be going at the problem from the wrong end. A map of the national road network tells us some useful things about trade, but not what it is or, in the deeper sense, how it works. Without those roads, trade would not occur in anything like the same form; blockages and poor connections may hamper or distort it, and in regions isolated by catastrophic, um, road lesions, trade may cease altogether. Of course to understand things properly we should need to know that there are different kinds of vehicle doing different jobs on those roads, that places may be connected by canals and railways as well as roads, and so on. But more fundamentally, if we start with the map, we have no idea what trade really is. It is, in fact, primarily driven and determined by what people want, need, and believe, and if we fall into the trap of thinking that it is wholly determined by the availability of trucks, goods, and roads we shall make a serious mistake.

Third, and perhaps it’s the same problem in different clothes, we still don’t have any clear and accepted idea of how neurology gives rise to consciousness anyway. We’re not anywhere near being able to look at a network and say, yup, that is (or could be) conscious, if indeed it is ever possible to reach such a conclusion.

So do we really even want a map of the connectome? Oh yes…

5 thoughts on “Mapping the Connectome”

  1. The prospects seem hopeless for “discovering” sufficient or even necessary neuro-logical conditions for experiential awareness. Inferences from neurology (whether of types or tokens) to experience (whether of types or tokens) are paradigmatic non-sequiturs. Quintessential non-sequiturs. Our Western notion of experiential awareness or “felt” subjectivity stands more in need of conceptual unpacking than neurology stands in need of empirical unpacking.

  2. Peter,
    I think your final point catches it. Having the connectome almost certainly won’t give us the full story, but it will tell us things we don’t currently know. And if we do it at the level that C. elegans was mapped, and store it in a database available to researchers worldwide, I think it will enable breakthroughs we can’t currently imagine.

    It will reveal a lot about how cognition overall happens. Knowing all the pathways between, say, the superior colliculus and the amygdala, between the pulvinar and the anterior cingulate cortex, or between the hippocampus and the cortical regions, will give us powerful clues to how attention, memory, emotional feelings, spatial navigation, and many other things come about.

    Will it reveal consciousness? We’ll first have to agree on what that term means, at least beyond things like “subjective experience” or similar phenomenal descriptions.

  3. Is all of history and our industrial revolution – including AI – just a map of our past’s biological evolution…
    …that maps have been needed since we followed scents and foot prints, painted in caves, spoke languages…

    Now, we map ourselves as if we were ‘kind’ of a static relativity not subject to the next dual particle wave…

    Doesn’t “Who is present”, present itself as hugely necessary for connectome science to be a map of any use…
    …That neural systems have evolved, to Who in Nature, equal to What in Nature…

  4. Yeah Peter,

    Looking at the physical wiring is simply the wrong lens for understanding consciousness. Cognition is about information processing in my view, and that’s the province of computer science (applied mathematics). So questions like ‘how is knowledge represented and used to achieve goals?’ are the correct ones to ask, not questions about physical details. Those positing special physical mechanisms for consciousness aren’t even in the right ballpark for an explanation. Computer science (applied mathematics), not physics, is the right lens for understanding this.

    Here’s the outline of my current proposed solution to consciousness (cross-posted from Scott Aaronson’s blog). You’ll see it’s a computer science explanation I’m postulating; it’s *not* about any physical details.

    Solution to consciousness in less than 200 words:

    ‘Consciousness is a symbolic language for modelling time (TPTA – Temporal Perception & Temporal Action)!

    There are two types of time – (1) logical time – a high-level abstract tree of the structure of a logical argument – call this an ‘argument tree’, and (2) physical time – a low-level tree showing counterfactual possibilities representing physical causality – call this a ‘grammar tree’. Both types of time are represented by ‘computation trees’, an extension of temporal logic (a type of modal logic).

    Consciousness arises when the argument trees (representing logical time) are integrated with the grammar trees (representing physical time) to form an internal ‘self-model’ – call this a ‘narrative tree’ (or cognitive time). The argument tree lets us plan for the future (Temporal Action), the grammar tree lets us reflect on the past (Temporal Perception), and the narrative tree (the self-model) is for communicating our intentions in the present (Choice). TPTA – Temporal Perception & Temporal Action!’
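    (For readers unfamiliar with the term: ‘computation trees’ come from branching-time temporal logics such as CTL, where a state branches into its possible futures. The snippet below is only a toy illustration of that general idea – the state names, labels, and the `exists_finally` helper are my own inventions for the example, not part of the TPTA proposal – showing how a counterfactual property like ‘some possible future reaches the goal’ can be checked over such a tree.)

    ```python
    # Toy computation tree: each state carries labels and branches into
    # possible successor states (counterfactual futures).
    # "EF p" (exists-finally p) holds at a state if SOME path from it
    # eventually reaches a state labelled p.

    class State:
        def __init__(self, labels, successors=None):
            self.labels = set(labels)
            self.successors = successors or []

    def exists_finally(state, prop, seen=None):
        """True if some path from `state` reaches a state labelled `prop`."""
        seen = seen if seen is not None else set()
        if id(state) in seen:          # avoid looping on cyclic graphs
            return False
        seen.add(id(state))
        if prop in state.labels:
            return True
        return any(exists_finally(s, prop, seen) for s in state.successors)

    # A tiny tree: from "now", two counterfactual branches open up.
    goal = State({"goal"})
    detour = State({"detour"}, [goal])
    dead_end = State({"dead_end"})
    now = State({"now"}, [detour, dead_end])

    print(exists_finally(now, "goal"))       # True: now -> detour -> goal
    print(exists_finally(dead_end, "goal"))  # False: no branch reaches it
    ```

    The point of the illustration is just that a tree of branching possibilities supports reasoning about what *could* happen, not merely what *did* – which is the sense in which such trees represent counterfactuals.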

  5. …The philosopher Aristotle wrote in his Rhetoric:

    “Naturalness is persuasive, artificiality is the contrary; for our hearers are prejudiced and think we have some design against them, as if we were mixing their wines for them. It is like the difference between the quality of Theodorus’ voice and the voices of all other actors: his really seems to be that of the character who is speaking, theirs do not.”

    …”However, artificiality does not necessarily have a negative connotation, as it may also reflect the ability of humans to replicate forms or functions arising in nature, as with an artificial heart or artificial intelligence.”…wiki

    Thanks C I for your kind of human-writing-mapping, of quality designs for the who and the what in naturalness…
