Posts tagged ‘Henry Markram’

An interesting but somewhat problematic paper from the Blue Brain project claims that the application of topology to neural models has provided a missing link between structure and function. That's exciting, because that kind of missing link is just what we need to enable us to understand how the brain works. The claim about the link is right there in the title, but unfortunately, so far as I can see, the paper itself really attempts something more modest. It seems to offer a new exploration of some ground where future research might conceivably put one end of the missing link. There also seem to me to be some problems in the way we're expected to interpret some of the findings reported.

That may sound pretty negative. I should perhaps declare in advance that I know little neurology and less topology, so my opinion is not worth much. I also have form as a Blue Brain sceptic, so you can argue that I bring some stored-up negativity to anything associated with it. I've argued in the past that the basic goal of the project, of simulating a complete human brain, is misconceived and wildly over-ambitious; not just a waste of money but possibly also a distraction which might suck resources and talent away from more promising avenues.

One of the best replies to that kind of scepticism is to say: well, look; even if we don't deliver the full brain simulation, the attempt will energise and focus our research in a way which will yield new and improved understanding. We'll get a lot of good research out of it even if the goal turns out to be unattainable. The current paper, which demonstrates new mathematical techniques, might well be a good example of that kind of incidental pay-off. There's a nice explanation of the paper here, with links to some other coverage, though I think the original text is pretty well written and accessible.

As I understand it, topological approaches to neurology in the past have typically considered neural networks as static objects. The new approach taken here adds the notion of directionality, as though each connection were a one-way street. This is more realistic for neurons. We can have groups of neurons where all are connected to all, but only one neuron provides a way into the group and one provides a way out; these are directed simplices. These simplices can be connected to others at their edges where, say, two of the member neurons are also members of a neighbouring simplex. Where a series of simplices are connected in this way, they may surround a void where nothing is going on. These cavities provide a higher level of structure, but I confess I'm not altogether clear as to why they are interesting. Holes, of course, are dear to the heart of any topologist, but in terms of function I'm not really clear about their relevance.
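To make the idea concrete, here is a toy brute-force sketch of my own (nothing to do with the paper's actual toolchain, which works at far larger scales): a set of neurons forms a directed simplex if there is some ordering of them in which every earlier neuron connects to every later one, giving a single way in and a single way out.

```python
import itertools

def is_directed_simplex(edges, nodes):
    """True if some ordering of `nodes` has every earlier node
    connected to every later one -- one 'way in' and one 'way out'."""
    es = set(edges)
    return any(
        all((order[i], order[j]) in es
            for i in range(len(order)) for j in range(i + 1, len(order)))
        for order in itertools.permutations(nodes)
    )

def directed_simplices(edges, n_nodes, dim):
    """All directed simplices of a given dimension (dim + 1 neurons)."""
    return [c for c in itertools.combinations(range(n_nodes), dim + 1)
            if is_directed_simplex(edges, c)]
```

So in a tiny network with connections 0→1, 0→2, 1→2 and 2→3, only the triple (0, 1, 2) is a directed 2-simplex: neuron 0 is the way in, neuron 2 the way out.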

Anyway, there's a lot in the paper, but two things seem especially noteworthy. First, the researchers observed many more simplices, of much higher dimensionality, than could be expected from a random structure (they tested several such random structures, put together according to different principles). 'Dimensionality' here just refers to how many neurons are involved; a simplex of higher dimensionality contains more neurons. Second, they observed a characteristic pattern when the network was presented with a 'stimulus': simplices of gradually higher and higher dimensionality would appear and then finally disperse. This is not, I take it, a matter of the neurons actually wiring up new connections on the fly; it's simply a question of which neurons are actively involved, through connections that are actually firing.
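The comparison against random controls can be illustrated with another toy sketch (again my own construction, not the paper's method, which used much larger networks and several kinds of null model): count the directed triangles (2-simplices) in a network and compare with the average over random directed graphs having the same number of edges.

```python
import itertools
import random

def count_directed_triangles(edges, n):
    """Count directed 2-simplices: ordered triples (a, b, c) with
    edges a->b, a->c and b->c (one source, one sink)."""
    es = set(edges)
    return sum((a, b) in es and (a, c) in es and (b, c) in es
               for a, b, c in itertools.permutations(range(n), 3))

def random_baseline(n, n_edges, trials=200, seed=1):
    """Average triangle count over random digraphs with the same edge count."""
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    return sum(count_directed_triangles(rng.sample(pairs, n_edges), n)
               for _ in range(trials)) / trials
```

A strictly feed-forward network on five neurons (every lower-numbered neuron connecting to every higher-numbered one) contains ten directed triangles; random networks with the same ten edges average noticeably fewer, which is the shape of the paper's first finding in miniature.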

That’s interesting, but all of this so far was discovered in the Blue Brain simulated neurons, more or less those same tiny crumbs of computationally simulated rat cortex that were announced a couple of years ago. It is, of course, not safe to assume that real brain behaves in the same way; if we rely entirely on the simulation we could easily be chasing our tails. We would build the simulation to match our assumptions about the brain and then use the behaviour of the simulation to validate the same assumptions. In fact the researchers very properly tried to perform similar experiments with real rat cortex. This requires recording activity in a number of adjacent neurons, which is fantastically difficult to pull off, but to their credit they had some success; in fact the paper claims they confirmed the findings from the simulation. The problem is that while the simulated cortex was showing simplices of six or seven dimensions (even higher numbers are quoted in some of the media reports, up to eleven), the real rat cortex only managed three, with one case of four. Some of the discussion around this talks as though a result of three is partial confirmation of a result of six, but of course it isn’t. Putting it brutally, the team’s own results in real cortex contradicted what they had found in the simulation. Now, there could well be good reasons for that; notably they could only work with a tiny amount of real cortex. If you’re working with a dozen neurons at a time, there’s obviously quite a low ceiling on the complexity you can expect. But the results you got are the results you got, and I don’t see that there’s a good basis here for claiming that the finding of high-order simplices is supported in real brains. In fact what we have if anything is prima facie evidence that there’s something not quite right about the simulation. 
The researchers actually took a further step here by producing a simulation of the actual neurons they had tested and then re-running the tests. Curiously, the simulated versions in these cases produced fewer simplices than the real neurons. The paper interprets this as supportive of its conclusions; if the real cortex was more productive of simplices, it argues, then we might expect big slabs of real brain to have even more simplices of even higher dimensionality than the remarkable results we got with the main simulation. I don't think that kind of extrapolation is admissible; what you really got was another result showing that your simulations do not behave like the real thing. In fact, if a simulation of only twelve neurons behaves differently from the real thing in significant respects, that surely must indicate that the simulation isn't reproducing the real thing very well?

The researchers also looked at the celebrated roundworm C. elegans, the only organism whose neural map (or connectome) is known in full, and apparently found evidence of high-order simplices – though I think it can only have been a potential for such simplices, since they don't seem to have performed real or simulated experiments, merely analysing the connectome.

Putting all that aside, and supposing we accept the paper’s own interpretations, the next natural question is: so what? It’s interesting that neurons group and fire in this way, but what does that tell us about how the brain actually functions? There’s a suggestion that the pattern of moving up to higher order simplices represents processing of a sensory input, but in what way? In functional terms, we’d like the processing of a stimulus to lead on to action, or perhaps to the recording of a memory trace, but here we just seem to see some neurons get excited and then stop being excited. Looking at it in simple terms, simplices seem really bad candidates for any functional role, because in the end all they do is deliver the same output signal as a single neural connection would do. Couldn’t you look at the whole thing with a sceptical eye and say that all the researchers have found is that a persistent signal through a large group of neurons gradually finds an increasing number of parallel paths?

At the end of the paper we get some speculation that addresses this functional question directly. The suggestion is that active high-dimensional simplices might be representing features of the stimulus, while the grouping around cavities binds together different features to represent the whole thing. It is, if sketchy, a tenable speculation, but quite how this would amount to representation remains unclear. There are probably other interesting ways you might try to build mental functions on the basis of escalating simplices, and there could be more to come in that direction. For now though, it may give us interesting techniques, but I don’t think the paper really delivers on its promise of a link with function.

Are connectomes the future?  Although the derivation of the word “connectome” makes no sense – as I understand it the “-ome” bit is copied from “genome”, which in turn was copied from “chromosome”, losing a crucial ‘s’ in the process* – it was coined simultaneously but separately by Olaf Sporns and Patric Hagmann, so it is clearly a word whose time to emerge has come.

It means a functionally coherent set of neural connections, or a map of the same. This may be the entire set of connections in a brain or a nervous system, but it may also be a smaller set which link and work together.  There is quite a lot going on in this respect: the Human Connectome Project is preparing to move into its second, data-gathering phase; there’s also the (more modest or perhaps more realistic) Mouse Connectome Project.  One complete connectome, that for the worm Caenorhabditis elegans, already exists (in fact I think it existed before the word “connectome”) and is often mentioned. The Open Connectome Project has a great deal of information about this and much besides.

The idea of the connectome was given a new twist by Sebastian Seung in his TED talk “I Am My Connectome”, and he has now published a book called (guess what) Connectome. In that he gently and thoughtfully backs away a bit from the unqualified claim that personal identity is situated in the connectome of the whole brain. It's a useful book which falls into three parts: a lucid exposition of the neural structure of the brain; some discussion and proposals on connectomic investigation; and some more fanciful speculation, examined seriously but without losing touch with common sense. Seung touches briefly on the spat between Henry Markram and Dharmendra Modha: Markram's Blue Brain project, you may recall, aims to simulate an entire brain, and he was infuriated by Modha's claim to have simulated a cat brain on the basis of a far less detailed approach (Markram's project seeks to model the complex behaviour of real neurons; Modha's treated them as standard nodes). Seung is quite supportive of these simulations, but I thought his discussion of the very large difficulties involved, and the simplifications inherent even in Markram's scrupulous approach, was implicitly devastating.

What should we make of all this connectome stuff? In practical terms the emergence of the term “connectome” adds nothing much to our conceptual armoury: we could and did talk about neural networks anyway. It’s more that it represents a new surge of confidence that neurological approaches can shoulder aside the psychologists, the programmers, and the philosophers and finally get the study of the human mind moving forward on a scientific basis. To a large extent this confidence springs from technical advances which mean it has finally begun to seem reasonable to talk about drawing up a detailed wiring diagram of sets of neurons.

Curiously though, the term also betrays an unexpected lack of confidence. The deliberate choice of a word which resembles one from genetics and recalls the Human Genome Project clearly indicates an envy of that successful field and a desire to emulate it. This is not the way thrusting, successful fields of study behave; the connectonauts seem to be embarking on their explorations without having shed that slightly resentful feeling of being the junior cousin. Perhaps it’s just that like most of us they are slightly frightened of Richard Dawkins. However, it could also be a well-founded sense that they are getting into something which is likely to turn out complicated in ways that no-one could have foreseen.

One potential source of difficulty lies in the fact that looking for connectomes tends to imply a commitment to modularity.  The modularity (or otherwise) of mind has been extensively discussed by philosophers and psychologists, and neurologists have come up with pretty strong evidence that localisation of many functions is a salient feature of the brain: but there is a risk that the modules devised by evolution don’t match the ones we expect to find, and hence are difficult to recognise or interpret; and worse, it’s quite possible that important functions are not modularised at all, but carried out by heterogeneous and variable sets of neurons distributed over a wide area. If so, looking for coherent connectomes might be a bad way of approaching the brain.

In this respect we may be prey to misconceiving the brain through thinking of it as though it were an artefact. Human-designed machines need to have an intelligible structure so that they can be constructed and repaired easily; and for complex systems modularisation is best practice. A complex machine is put together out of replaceable sub-systems that perform discrete tasks; good code is structured to maximise reusability and intelligibility.  But Nature doesn’t have to work like that: evolution might find tangled systems that work fine and actually generate lower overheads.

That might be so, but when we look at animal biology the modularisation is actually pretty striking: the internal organs of a human being, say, are structured in a way that bears a definite resemblance to the components of a machine. Evolution never had to take account of the possibility of replacement parts, but (immune system aside) in fact our internal organisation facilitates transplant surgery much more than it need have done.

Why is that? I’d suggest that there is a secondary principle of evolution at work. When evolution is (so to speak) devising a creature for a new ecological niche, it doesn’t actually start from scratch: it modifies one of the organisms already to hand. Just as a designer finds it easier to build a new machine out of existing parts, a well-modularised creature is more likely to give rise to variant descendants that work in new roles. So besides fitness to survive, we have fitness to give rise to new descendant species; and modularisation enhances that second-order kind of fitness.  Lots of weird creatures that worked well back in the Cambrian did not lend themselves easily to redesign, and hence have no living descendant species, whereas some creature with a backbone, four limbs with five digits each and a tail, proved to be a fertile source of useable variation: leave out some digits, a pair of limbs or the tail; put big legs on the back and small ones on the front, and straightaway you’ve got a viable new modus operandi. In the same way a creature that bolted on an appendix to its gut might be more ready to produce descendants without the appendix function than one which had reconditioned the function of its whole system (I’m getting well out of my depth here). In short, maybe there is an evolutionary tendency to modularisation after all, so it is reasonable to look for connectomes.  As a further argument, we may note that it would seem to make sense in general for neurons that interact a lot to be close together, forming natural connectomes, though given the promiscuous connectivity of the brain some caution about that may be appropriate.

Anyway, what we care about here is consciousness, so the question for us must be: is there a consciousness connectome? In one sense, of course, there must be (and here we stub our toe on another potential danger of the connectome approach): if we just go on listing all the neurons that play a part in consciousness we will at some point have a full set.  But that might end up being the entire brain: what we want to know is whether there is a coherent self-contained module or set of modules supporting consciousness. Things we might be looking out for would surely include a Global Workspace connectome, and I think perhaps a Higher Order Thought Connectome: either might be relatively clearly identifiable on the basis of their pattern of connections.

I don't think we're in any position to say yet, but as a speculation I would guess that in fact there is a set of connectomes that have to act together to support a set of interlocking functions making up consciousness: sub-conscious thought, awareness/input, conscious reflection, emotional tone, and so on. I'm not suggesting by any means that that is the actual list; rather, I think it is likely that connectome research might cause us to rethink our categories, just as research is already causing us to stop thinking that memory is a single function (as we had always supposed).

There is already some sign that connectomes might carve up the brain in ways that don’t match our existing ways of thinking about it:  Martijn van den Heuvel and Olaf Sporns have published a paper which seems to show that there are twelve sites of special interest where interconnections are especially dense: they call this a “rich club”, but I think the functional implications of these twelve special zones remain tantalisingly obscure for the moment.
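"Rich club" is, as I understand it, a standard measure in network science; the gist can be sketched in a few lines of toy code (mine, not van den Heuvel and Sporns' analysis): compare the density of connections among the highest-degree nodes with the density of the network as a whole.

```python
from collections import Counter

def rich_club_density(edges, top_n):
    """Density of connections among the top_n highest-degree nodes.
    A 'rich club' shows up as a density well above the network's overall
    density (edges treated as undirected for simplicity)."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    club = {node for node, _ in deg.most_common(top_n)}
    internal = sum(1 for a, b in edges if a in club and b in club)
    possible = top_n * (top_n - 1) / 2  # undirected pairs within the club
    return internal / possible
```

In a toy network of three fully interconnected hubs, each with one spoke, the hub-to-hub density is 1.0 while the network as a whole is much sparser: the hubs form a rich club.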

In the end my guess is that by about 2040 we shall look back on the connectome as a paradigm that turned out to be inadequate to the full complexity of the brain, but one which inspired research essential to a much improved understanding.

*I do realise BTW that words are under no obligation to mean what the Latin or Greek they were derived from suggests – “chromosome” would mean “colour body” which is a trifle opaque to say the least.