Posts tagged ‘Blue brain’

An interesting but somewhat problematic paper from the Blue Brain project claims that the application of topology to neural models has provided a missing link between structure and function. That’s exciting, because that kind of missing link is just what we need to understand how the brain works. The claim about the link is right there in the title, but unfortunately, so far as I can see, the paper itself attempts something more modest: a new exploration of some ground where future research might conceivably put one end of the missing link. There also seem to me to be some problems in the way we’re expected to interpret some of the findings reported.

That may sound pretty negative. I should perhaps declare in advance that I know little neurology and less topology, so my opinion is not worth much. I also have form as a Blue Brain sceptic, so you can argue that I bring some stored-up negativity to anything associated with it. I’ve argued in the past that the basic goal of the project, of simulating a complete human brain, is misconceived and wildly over-ambitious; not just a waste of money but possibly also a distraction which might suck resources and talent away from more promising avenues.

One of the best replies to that kind of scepticism is to say: well, look; even if we don’t deliver the full brain simulation, the attempt will energise and focus our research in a way which will yield new and improved understanding. We’ll get a lot of good research out of it even if the goal turns out to be unattainable. The current paper, which demonstrates new mathematical techniques, might well be a good example of that kind of incidental pay-off. There’s a nice explanation of the paper here, with links to some other coverage, though I think the original text is pretty well written and accessible.

As I understand it, topological approaches to neurology in the past have typically considered neural networks as static objects. The new approach taken here adds the notion of directionality, as though each connection were a one-way street. This is more realistic for neurons. We can have groups of neurons where all are connected to all, but only one neuron provides a way into the group and one provides a way out; these are directed simplices. These simplices can be connected to others at their edges where, say, two of the member neurons are also members of a neighbouring simplex. Where there is a series of connected simplices, they may surround a void where nothing is going on. These cavities provide a higher level of structure, but I confess I’m not altogether clear as to why they are interesting. Holes, of course, are dear to the heart of any topologist, but in terms of function I’m not really clear about their relevance.
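The informal description above can be turned into a toy check: given a directed connection graph, a group of neurons forms a directed simplex, in this loose sense, if every pair is connected one way or the other, and within the group there is exactly one pure source (the way in) and one pure sink (the way out). This is a hedged sketch of the idea, not the paper’s formal definition; the neuron labels and edges are invented for illustration.

```python
from itertools import combinations

def is_directed_simplex(nodes, edges):
    """Check whether `nodes` form a directed simplex in the informal
    sense described above: every pair is connected in some direction,
    and the group has exactly one pure source (no incoming edges from
    within the group) and one pure sink (no outgoing edges)."""
    nodes = set(nodes)
    internal = {(a, b) for (a, b) in edges if a in nodes and b in nodes}
    # All-to-all: every unordered pair must be linked one way or the other.
    for a, b in combinations(nodes, 2):
        if (a, b) not in internal and (b, a) not in internal:
            return False
    sources = [n for n in nodes if not any(b == n for (_, b) in internal)]
    sinks = [n for n in nodes if not any(a == n for (a, _) in internal)]
    return len(sources) == 1 and len(sinks) == 1

# A three-neuron example: 0 feeds 1 and 2, and 1 feeds 2.
edges = {(0, 1), (0, 2), (1, 2)}
print(is_directed_simplex([0, 1, 2], edges))  # True: source 0, sink 2
```

Note that a three-neuron cycle fails the check, even though all pairs are connected: every neuron has both incoming and outgoing edges, so there is no single way in or out.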

Anyway, there’s a lot in the paper but two things seem especially noteworthy. First, the researchers observed many more simplices, of much higher dimensionality, than could be expected from a random structure (they tested several such random structures put together according to different principles). ‘Dimensionality’ here just refers to how many neurons are involved; a simplex of higher dimensionality contains more neurons. Second, they observed a characteristic pattern when the network was presented with a ‘stimulus’: simplices of gradually higher and higher dimensionality would appear and then finally disperse. This is not, I take it, a matter of the neurons actually wiring up new connections on the fly; it’s simply a matter of which neurons are actively involved, through connections that are actually firing.
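For the record, in the standard topological convention a simplex on n+1 neurons has dimension n, so ‘more neurons’ does indeed mean ‘higher dimension’. A brute-force count over a toy network can make this concrete; here a group counts as a directed simplex if its neurons can be linearly ordered so that every earlier one connects to every later one. The network below is invented for illustration, and real analyses would need something far more efficient than this exhaustive search.

```python
from itertools import combinations, permutations

# Toy directed network: a feed-forward "diamond" with extra shortcuts.
edges = {(0, 1), (0, 2), (1, 3), (2, 3), (0, 3), (1, 2)}
nodes = {0, 1, 2, 3}

def count_simplices_by_dimension(nodes, edges):
    """Brute-force count of directed simplices, binned by dimension.
    A subset counts if some linear ordering of its neurons has every
    earlier neuron connected to every later one; its dimension is the
    number of neurons minus one."""
    counts = {}
    for k in range(2, len(nodes) + 1):
        for subset in combinations(sorted(nodes), k):
            if any(all((a, b) in edges
                       for a, b in combinations(order, 2))
                   for order in permutations(subset)):
                counts[k - 1] = counts.get(k - 1, 0) + 1
    return counts

print(count_simplices_by_dimension(nodes, edges))
# → {1: 6, 2: 4, 3: 1}: six edges, four triangles, one 3-dimensional simplex
```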

That’s interesting, but all of this so far was discovered in the Blue Brain simulated neurons, more or less those same tiny crumbs of computationally simulated rat cortex that were announced a couple of years ago. It is, of course, not safe to assume that real brain behaves in the same way; if we rely entirely on the simulation we could easily be chasing our tails. We would build the simulation to match our assumptions about the brain and then use the behaviour of the simulation to validate the same assumptions. In fact the researchers very properly tried to perform similar experiments with real rat cortex. This requires recording activity in a number of adjacent neurons, which is fantastically difficult to pull off, but to their credit they had some success; in fact the paper claims they confirmed the findings from the simulation. The problem is that while the simulated cortex was showing simplices of six or seven dimensions (even higher numbers are quoted in some of the media reports, up to eleven), the real rat cortex only managed three, with one case of four. Some of the discussion around this talks as though a result of three is partial confirmation of a result of six, but of course it isn’t. Putting it brutally, the team’s own results in real cortex contradicted what they had found in the simulation. Now, there could well be good reasons for that; notably they could only work with a tiny amount of real cortex. If you’re working with a dozen neurons at a time, there’s obviously quite a low ceiling on the complexity you can expect. But the results you got are the results you got, and I don’t see that there’s a good basis here for claiming that the finding of high-order simplices is supported in real brains. In fact what we have if anything is prima facie evidence that there’s something not quite right about the simulation. 
The researchers actually took a further step here by producing a simulation of the actual neurons they tested and then re-running the tests. Curiously, the simulated versions in these cases produced fewer simplices than the real neurons. The paper interprets this as supportive of its conclusions; if the real cortex was more productive of simplices, it argues, then we might expect big slabs of real brain to have even more simplices of even higher dimensionality than the remarkable results we got with the main simulation. I don’t think that kind of extrapolation is admissible; what you really got was another result showing that your simulations do not behave like the real thing. In fact, if a simulation of only twelve neurons behaves differently from the real thing in significant respects, that surely must indicate that the simulation isn’t reproducing the real thing very well?

The researchers also looked at the celebrated roundworm C. elegans, the only organism whose neural map (or connectome) is known in full, and apparently found evidence of high-order simplices – though I think it can only have been a potential for such simplices, since they don’t seem to have performed real or simulated experiments, merely analysing the connectome.

Putting all that aside, and supposing we accept the paper’s own interpretations, the next natural question is: so what? It’s interesting that neurons group and fire in this way, but what does that tell us about how the brain actually functions? There’s a suggestion that the pattern of moving up to higher order simplices represents processing of a sensory input, but in what way? In functional terms, we’d like the processing of a stimulus to lead on to action, or perhaps to the recording of a memory trace, but here we just seem to see some neurons get excited and then stop being excited. Looking at it in simple terms, simplices seem really bad candidates for any functional role, because in the end all they do is deliver the same output signal as a single neural connection would do. Couldn’t you look at the whole thing with a sceptical eye and say that all the researchers have found is that a persistent signal through a large group of neurons gradually finds an increasing number of parallel paths?

At the end of the paper we get some speculation that addresses this functional question directly. The suggestion is that active high-dimensional simplices might be representing features of the stimulus, while the grouping around cavities binds together different features to represent the whole thing. It is, if sketchy, a tenable speculation, but quite how this would amount to representation remains unclear. There are probably other interesting ways you might try to build mental functions on the basis of escalating simplices, and there could be more to come in that direction. For now though, it may give us interesting techniques, but I don’t think the paper really delivers on its promise of a link with function.

Aeon Magazine has published my Opinion piece on brain simulation. Go on over there and comment! Why not like me while you’re at it!!!

I’m sorry about that outburst – I got a little over-excited…

Coming soon (here): Babbage’s forgotten rival…

The brain is not a gland. But Henry Markram seems to be in danger of simulating one – a kind of electric gland.

What am I on about? The Blue Brain Project has published details of its most ambitious simulation yet; a computer model of a tiny sliver of rat brain. That may not sound exciting on the face of it, but the level of detail is unprecedented…

The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm3 containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses.

The results are good. Without parameter tuning – that is, without artificial adjustments to make it work the way it should work – the digital simulation produces patterns of electrical activity that resemble those of real slivers of rat brain. The paper is accessible here. It seems a significant achievement and certainly attracted a lot of generally positive attention – but there are some significant problems. The first is that the methodological issues which were always evident remain unresolved. The second is certain major weaknesses in the simulation itself. The third is that as a result of these weaknesses the simulation implicitly commits Markram to some odd claims, ones he probably doesn’t want to make.

First, the methodology. The simulation is claimed as a success, but how do we know? If we’re simulating a heart, then it’s fairly clear it needs to pump blood. If we’re simulating a gland, it needs to secrete certain substances. The brain? It’s a little more complicated. Markram seems implicitly to take the view that the role of brain tissue is to generate certain kinds of electrical activity; not particular functional outputs, just generic activity of certain kinds.

One danger with that is a kind of circularity. Markram decides the brain works a certain way, he builds a simulation that works like that, and triumphantly shows us that his simulation does indeed work that way. Vindication! It could be that he is simply ignoring the most important things about neural tissue, things that he ought to be discovering. Instead he might just be cycling round in confirmation of his own initial expectations. One of the big dangers of the Blue Brain project is that it might entrench existing prejudices about how the brain works and stop a whole generation from thinking more imaginatively about new ideas.

The Blue simulation produces certain patterns of electrical activity that look like those of real rat brain tissue – but only in general terms. Are the actual details important? After all, a string of random characters with certain formal constraints looks just like meaningful text, or valid code, or useful stretches of DNA, but is in fact useless gibberish. Putting in constraints which structure the random text a little and provide a degree of realism is a relatively minor task; getting output that’s meaningful is the challenge. It looks awfully likely that the Blue simulation has done the former rather than the latter, and to be brutal that’s not very interesting. At worst it could be like simulating an automobile which never moves, but whose engine noise is beautifully realistic. We might well think that the project is falling into the trap I mentioned last time: mistaking information about the brain for the information in the brain.

Now it could be that actually the simulation is working better than that; perhaps it isn’t as generic as it seems, perhaps this particular bit of rat brain works somewhat generically anyway; or perhaps somehow in situ the tissue trains or conditions itself, saving the project most of the really difficult work. The final answer to the objections above might be if the simulation could be plugged back into a living rat brain and the rat behaviour shown to continue properly. If we could do that it would sidestep the difficult questions about how the brain operates; if the rat behaves normally, then even though we still don’t know why, we know we’re doing it right. In practice it doesn’t seem very likely that that would work, however: the brain is surely about producing specific control outputs, not about glandularly secreting generic electrical activity.

A second set of issues relates to the limitations of the simulation. Several of the significant factors I mentioned above have been left out; notably there are no astrocytes and no neurotransmitters. The latter is a particularly surprising omission because Markram himself has done significant work in the past on trying to clarify this area. The fact that the project has chosen to showcase a simulation without them must give rise to a suspicion that its ambitions are being curtailed by the daunting reality; that there might even be a dawning realisation internally that what has been bitten off here really is far beyond chewing and the need to deliver has to trump realism. That would be a worrying straw in the wind so far as the project’s future is concerned.

In addition, while the simulation reproduces a large number of different types of neuron, the actual wiring has been determined by an algorithm. A lot depends on this: if the algorithm generates useful and meaningful connections then it is itself a master-work beside which the actual simulation is trivial. If not, then we’re back with the question of whether generic kinds of connection are really good enough. They may produce convincing generic activity, and maybe that’s even good enough for certain areas of rat brain, but we can be pretty sure it isn’t good enough for brain activity generally.

Harking back for a moment to methodology, there’s still something odd in any case about trying to simulate a process you don’t understand. Any simulation reproduces certain features of the original and leaves others out. The choice is normally determined by a thesis about how and why the thing works: that thesis allows you to say which features are functionally necessary and which are irrelevant. Your simulation only models the essential features and its success therefore confirms your view about what really matters and how it all operates. Markram, though, is not starting with an explicit thesis. One consequence is that it is hard to tell whether his simulation is a success or not, because he didn’t tell us clearly enough in advance what it was he was trying to make happen. What we can do is read off the implicit assumptions that the project cannot help embodying in its simulation. To hold up the simulation as a success is to make the implicit claim that the brain is basically an electrical network machine whose essential function is to generate certain general types of neural activity. It implicitly affirms that the features left out of the simulation – notably the vast array and complex role of neural transmitters and receptors – are essentially irrelevant. That is a remarkable claim, quite unlikely, and I don’t think it’s one Markram really wants to make. But if he doesn’t, consistency obliges him to downplay the current simulation rather than foreground it.

To be fair, the simulation is not exactly being held up as a success in those terms. Markram describes it as a first draft. That’s fair enough up to a point (except that you don’t publish first drafts), but if our first step towards a machine that wrote novels was one that generated the Library of Babel (every possible combination of alphabetic characters plus spaces) we might doubt whether we were going in the right direction. The Blue Brain project in some ways embodies technological impatience; let’s get on and build it and worry about the theory later. The charge is that as a result the project is spending its time simulating the wrong features and distracting attention from the more difficult task of getting a real theoretical understanding; that it is making an electric gland instead of a brain.

The way the brain works is more complex than we thought. That’s a conclusion that several pieces of research over recent years have suggested for one reason or another: but some particularly interesting conclusions are reported in a paper in Nature Neuroscience (Anastassiou, Perin, Markram, and Koch). It has generally been the assumption that neurons are effectively isolated, interacting only at synapses: it was known that they could be influenced by each other’s electric fields, but it was generally thought that, given the typically tiny fields involved, these effects could be disregarded. The only known exceptions of any significance were certain cases where unusually large fields could induce ‘ephaptic coupling’, interfering with the normal working of neurons and causing problems.

Given the microscopic sizes involved and the weakness of the fields, measuring the actual influence of ephaptic effects is difficult, but for the series of experiments reported here a method was devised using up to twelve electrodes for a single neuron. It was found that extracellular fluctuation did produce effects within the neuron, at the minuscule level expected: however, although the effects were too small to produce any immediate additional action potentials, induced fluctuations in one neuron did influence neighbouring cells, producing a synchronisation of spike timing. In short, it turns out that neurons can influence each other and synchronise themselves through a mechanism completely independent of synapses.

So what? Well, first this may suggest that we have been missing an important part of the way the brain functions. That has obvious implications for brain simulations, and curiously enough, one of the names on the paper (he helped with the writing) is that of Henry Markram, leader of the most ambitious brain simulation project of all, Blue Brain. Things seem to have gone quiet on that project since completion of ‘phase one’; I suppose it is awaiting either more funding or the advances in technology which Markram foresaw as the route to a total brain simulation. In the meantime it seems the new research shows that, like all simulations to date, Blue Brain was built on an incomplete picture, and as it stood was doomed to ultimate failure.

I suppose, in the second place, there may be implications for connectionism. I don’t think neural networks are meant to be precise brain simulations, but the suggestion that a key mechanism has been missing from our understanding of the brain might at least suggest that a new line of research, building an equivalent mechanism into connectionist systems, could yield interesting results.

But third and most remarkable, this must give a big boost to those who have suggested that consciousness resides in the brain’s electrical field: Sue Pockett, for one, but above all JohnJoe McFadden, who back in 2002 declared that the effects of the brain’s endogenous electromagnetic fields deserved more attention. Citing earlier studies which had shown modulation of neuron firing by very weak fields, he concluded:

By whatever mechanism, it is clear that very weak em field fluctuations are capable of modulating neurone-firing patterns. These exogenous fields are weaker than the perturbations in the brain’s endogenous em field that are induced during normal neuronal activity. The conclusion is inescapable: the brain’s endogenous em field must influence neuronal information processing in the brain.

We may still hold back from agreeing that consciousness is to be identified with an electromagnetic field, but he certainly seems to have been ahead of the game on this.