Markram’s Electric Gland

The brain is not a gland. But Henry Markram seems to be in danger of simulating one – a kind of electric gland.

What am I on about? The Blue Brain Project has published details of its most ambitious simulation yet: a computer model of a tiny sliver of rat brain. That may not sound exciting on the face of it, but the level of detail is unprecedented…

The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm³ containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses.

The results are good. Without parameter tuning – that is, without artificial adjustments to make it work the way it should work – the digital simulation produces patterns of electrical activity that resemble those of real slivers of rat brain. The paper is accessible here. It seems a significant achievement and certainly attracted a lot of generally positive attention – but there are some significant problems. The first is that the methodological issues which were always evident remain unresolved. The second is certain major weaknesses in the simulation itself. The third is that as a result of these weaknesses the simulation implicitly commits Markram to some odd claims, ones he probably doesn’t want to make.

First, the methodology. The simulation is claimed as a success, but how do we know? If we’re simulating a heart, then it’s fairly clear it needs to pump blood. If we’re simulating a gland, it needs to secrete certain substances. The brain? It’s a little more complicated. Markram seems implicitly to take the view that the role of brain tissue is to generate certain kinds of electrical activity; not particular functional outputs, just generic activity of certain kinds.

One danger with that is a kind of circularity. Markram decides the brain works a certain way, he builds a simulation that works like that, and triumphantly shows us that his simulation does indeed work that way. Vindication! It could be that he is simply ignoring the most important things about neural tissue, things that he ought to be discovering. Instead he might just be cycling round in confirmation of his own initial expectations. One of the big dangers of the Blue Brain project is that it might entrench existing prejudices about how the brain works and stop a whole generation from thinking more imaginatively about new ideas.

The Blue simulation produces certain patterns of electrical activity that look like those of real rat brain tissue – but only in general terms. Are the actual details important? After all, a string of random characters with certain formal constraints looks just like meaningful text, or valid code, or useful stretches of DNA, but is in fact useless gibberish. Putting in constraints which structure the random text a little and provide a degree of realism is a relatively minor task; getting output that’s meaningful is the challenge. It looks awfully likely that the Blue simulation has done the former rather than the latter, and to be brutal that’s not very interesting. At worst it could be like simulating an automobile whose engine noise is beautifully realistic but which never moves. We might well think that the project is falling into the trap I mentioned last time: mistaking information about the brain for the information in the brain.
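The random-string analogy is easy to make concrete. The toy Python sketch below (illustrative only; the letter frequencies are rough invented values, not drawn from any corpus) samples characters according to English-like statistics. The output shares the gross statistical profile of real text while meaning nothing at all.

```python
import random

# Rough English-like letter frequencies (illustrative values only)
letters = "etaoinshrdlu "
weights = [12, 9, 8, 8, 7, 7, 6, 6, 6, 4, 3, 3, 18]

random.seed(0)
# Sample 60 characters according to the frequency constraints
gibberish = "".join(random.choices(letters, weights=weights, k=60))
print(gibberish)
```

The string would pass a crude statistical test for “English-ness” (letter frequencies, plausible word lengths) yet carries no meaning – which is exactly the worry about judging a brain simulation by the gross statistics of its electrical activity.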

Now it could be that actually the simulation is working better than that; perhaps it isn’t as generic as it seems, perhaps this particular bit of rat brain works somewhat generically anyway; or perhaps somehow in situ the tissue trains or conditions itself, saving the project most of the really difficult work. The final answer to the objections above might come if the simulation could be plugged back into a living rat brain and the rat’s behaviour shown to continue properly. If we could do that it would sidestep the difficult questions about how the brain operates; if the rat behaves normally, then even though we still don’t know why, we know we’re doing it right. In practice it doesn’t seem very likely that that would work, however: the brain is surely about producing specific control outputs, not about glandularly secreting generic electrical activity.

A second set of issues relates to the limitations of the simulation. Several of the significant factors I mentioned above have been left out; notably there are no astrocytes and no neurotransmitters. The latter is a particularly surprising omission because Markram himself has done significant work on trying to clarify this area in the past. The fact that the project has chosen to showcase a simulation without them must give rise to a suspicion that its ambitions are being curtailed by the daunting reality; that there might even be a dawning realisation internally that what has been bitten off here really is far beyond chewing and the need to deliver has to trump realism. That would be a worrying straw in the wind so far as the project’s future is concerned.

In addition, while the simulation reproduces a large number of different types of neuron, the actual wiring has been determined by an algorithm. A lot depends on this: if the algorithm generates useful and meaningful connections then it is itself a master-work beside which the actual simulation is trivial. If not, then we’re back with the question of whether generic kinds of connection are really good enough. They may produce convincing generic activity, and maybe that’s even good enough for certain areas of rat brain, but we can be pretty sure it isn’t good enough for brain activity generally.
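To see what “wiring determined by an algorithm” amounts to, here is a deliberately crude Python sketch of a statistical connectivity rule – a toy distance-dependent probability, not the Blue Brain algorithm itself (which constrains connections by reconstructed arbor overlap, bouton densities, and synapses per connection; every name and parameter here is invented for illustration):

```python
import math
import random

random.seed(1)

# Toy network: neurons scattered at random positions in a unit cube
n = 200
pos = [(random.random(), random.random(), random.random()) for _ in range(n)]

def connect_prob(d, p_max=0.6, scale=0.15):
    """Connection probability falls off exponentially with distance."""
    return p_max * math.exp(-d / scale)

# Draw a directed connection i -> j according to the statistical rule
edges = []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        if random.random() < connect_prob(math.dist(pos[i], pos[j])):
            edges.append((i, j))

print(f"{len(edges)} directed connections among {n} neurons")
```

Any number of different wirings satisfy the same statistical rule equally well; the open question is whether the particular wiring such an algorithm emits carries the functional information a real circuit’s wiring does.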

Harking back for a moment to methodology, there’s still something odd in any case about trying to simulate a process you don’t understand. Any simulation reproduces certain features of the original and leaves others out. The choice is normally determined by a thesis about how and why the thing works: that thesis allows you to say which features are functionally necessary and which are irrelevant. Your simulation only models the essential features and its success therefore confirms your view about what really matters and how it all operates. Markram, though, is not starting with an explicit thesis. One consequence is that it is hard to tell whether his simulation is a success or not, because he didn’t tell us clearly enough in advance what it was he was trying to make happen. What we can do is read off the implicit assumptions that the project cannot help embodying in its simulation. To hold up the simulation as a success is to make the implicit claim that the brain is basically an electrical network machine whose essential function is to generate certain general types of neural activity. It implicitly affirms that the features left out of the simulation – notably the vast array and complex role of neural transmitters and receptors – are essentially irrelevant. That is a remarkable claim, quite unlikely, and I don’t think it’s one Markram really wants to make. But if he doesn’t, consistency obliges him to downplay the current simulation rather than foreground it.

To be fair, the simulation is not exactly being held up as a success in those terms. Markram describes it as a first draft. That’s fair enough up to a point (except that you don’t publish first drafts), but if our first step towards a machine that wrote novels was one that generated the Library of Babel (every possible combination of alphabetic characters plus spaces) we might doubt whether we were going in the right direction. The Blue Brain project in some ways embodies technological impatience; let’s get on and build it and worry about the theory later. The charge is that as a result the project is spending its time simulating the wrong features and distracting attention from the more difficult task of getting a real theoretical understanding; that it is making an electric gland instead of a brain.

18 thoughts on “Markram’s Electric Gland”

  1. You can simulate a rainstorm to an incredibly high degree of accuracy inside a computer.

    You won’t get wet though.

    You could synthesize a load of rain cloud though, using silver iodide – then you would get wet. But that is not simulation – that is physical replication.

    You don’t “simulate” a brain by duplicating the physical properties of a brain – that is replication, not simulation. That would not be a computational simulation.

    In the unlikely event that minds are caused solely by certain forms of electrical activity, reproducing those effects has – of course – nothing to do with computers. It is a replication of the physical factors that – admittedly an unlikely scenario – play a role in causing minds.

  2. Peter, you wrote:
    “a string of random characters with certain formal constraints looks just like meaningful text, or valid code, or useful stretches of DNA, but is in fact useless gibberish. Putting in constraints which structure the random text a little and provide a degree of realism is a relatively minor task; getting output that’s meaningful is the challenge.”

    This is a very strong counterargument to the usefulness of the Blue Brain project. Their “readout” is really arbitrary.

    The C. elegans people did something a lot more impressive (in my opinion) which was to show that they could replicate a neuronal pattern in silico which was sufficient to replicate the undulating movement of the worm.

    I think the blue brain people should start by trying to show that emulated slices of the visual cortex can perform the same types of computation that their real counterparts can. That would be much stronger evidence of what they want.

    John Davey, you wrote:
    “You can simulate a rainstorm to an incredibly high degree of accuracy inside a computer. You won’t get wet though.”

    This argument is really annoying to me, and it feels like a straw-man. Why would something inside a simulation change something outside of it? A more interesting thought experiment is to ask- to what degree of accuracy would I need to simulate a storm to convince someone that they are inside a storm? The property of wetness might be completely real within a simulation, if it changes the way in which you interact with other parameters in the simulation.

  3. Except that we can reconstruct what an individual is viewing, for example, in their dreams using fMRI (Horikawa et al. 2013). We also know that fMRI BOLD signals do correlate with neuronal activation in those regions. So there are some tests we can apply to the question of whether simulated neuronal activity is mimicking the right kind of activity.

  4. Peter this seems unjustifiably pessimistic, like Eeyore wrote the review rather than you.

    The criticisms just seem off, for instance:
    “In addition, while the simulation reproduces a large number of different types of neuron, the actual wiring has been determined by an algorithm. A lot depends on this: if the algorithm generates useful and meaningful connections then it is itself a master-work beside which the actual simulation is trivial. If not, then we’re back with the question of whether generic kinds of connection are really good enough. They may produce convincing generic activity, and maybe that’s even good enough for certain areas of rat brain, but we can be pretty sure it isn’t good enough for brain activity generally.”

    This seems mistaken. What we want are data-driven algorithms that describe statistical patterns of connectivity for particular regions of the brain, and we wouldn’t necessarily expect them to work for brain areas generally. But if the techniques for this particular piece of the rat brain work, then why not for other parts of other species’ brains? We always start by working with model systems in biology, working locally on tractable problems, and then try to scale up.

    “A second set of issues relates to the limitations of the simulation. Several of the significant factors I mentioned above have been left out; notably there are no astrocytes and no neurotransmitters. The latter is a particularly surprising omission because Markram himself has done significant work on trying to clarify this area in the past. The fact that the project has chosen to showcase a simulation without them must give rise to a suspicion that its ambitions are being curtailed by the daunting reality; that there might even be a dawning realisation internally that what has been bitten off here really is far beyond chewing and the need to deliver has to trump realism.”

    Every model will leave out some details. That isn’t to say such details are irrelevant but that the model is glossing over them for now, and may need to add them later. For neurotransmitters, they do include things like neurotransmitter release probability, EPSP and IPSP shapes determined from real data and the best models available, and excitatory/inhibitory neurons which implicitly contain different neurotransmitters. At any rate, these are things they can add.

    What they do have is more detail than is typically included in these types of models, and it is nontrivial that they get reasonable-looking ‘biological’ behavior without doing a lot of parameter fitting.

    I see this as an important first step in a many-decade-long process. The work of Hodgkin and Huxley has provided us with the most powerful, useful singular success story in computational neuroscience; it drives pretty much all research (justifiably), and people are finally pushing it in a larger-scale direction, incorporating a great deal of biological realism into the models (not every last detail, of course, but more than most).

    The interplay between experiment and model that will come of this should be very interesting. I find this work exciting, inspiring, and right-headed. I know a lot of Europeans have balked at it, but I looked over their criticisms and frankly found them very weak as scientific criticisms. Perhaps concerns about this project getting too much funding could be on target, but this was funded many years ago and it would be a mistake to pull out the rug now (I’m thinking of the superconducting supercollider fiasco in the US, cancelled in 1993 by short-sighted bureaucrats).

    Scientists are often very local and small in their thinking, e.g., looking at the function of one amino acid in one sodium channel. We also need people who are expansive and “big” in their thinking, and neuroscience in particular begs for some big science. I’m loving me some Blue Brain. 🙂
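Eric’s point that transmitter effects enter the model statistically – via release probabilities and PSP shapes – can be illustrated with a toy stochastic synapse in Python. This is a sketch only; the function and its parameters are invented for illustration, and the real models build on richer short-term plasticity dynamics (of the Tsodyks–Markram kind, with depression and facilitation) rather than a fixed release probability.

```python
import random

random.seed(2)

def synapse_response(n_spikes, p_release=0.5, epsp_mv=0.8):
    """Toy stochastic synapse: each presynaptic spike releases
    transmitter with probability p_release, each success adding one
    EPSP of fixed amplitude (no depression/facilitation dynamics)."""
    return sum(epsp_mv for _ in range(n_spikes) if random.random() < p_release)

# Mean response over many trials approaches n_spikes * p_release * epsp_mv
trials = [synapse_response(10) for _ in range(2000)]
mean_mv = sum(trials) / len(trials)
print(f"mean summed EPSP: {mean_mv:.2f} mV (expected ~4.0 mV)")
```

Even this minimal probabilistic element shows how “no neurotransmitters” can overstate the omission: the chemistry is absent, but its statistical footprint is not.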

  5. Of course I already noticed something to clarify:
    “But if the techniques for this particular piece of the rat brain work, then why not for other parts of other species’ brains? ”

    By this I meant the techniques used to generate the algorithms for particular parts of particular brains could be generalizable, even if the particular algorithm for brain region A cannot be used for brain region B. We could use similar techniques for getting the algorithms for these different regions, knowing full well the connectivity-generating algorithms will be different.

  6. Eric,

    Sorry to be slow responding. The point is that the brain is not like a gland which merely has to produce more of the same kind of stuff. The brain has to produce specific, correct control output. Given that, similar ‘statistical patterns of connectivity’ are no good at all. It’s not that they’re nearly right or a good working approximation: they are no good at all, successful only in ways that are irrelevant. You wouldn’t get into a self-driving car whose programming ran code that was only statistically similar to the correct code. A brain simulation that produces activity statistically like that of a real one is not really any better than a brain simulation that just looks like a brain.

    You rightly say that every model will leave out some details – yes, but ones that don’t matter, not essential features. Is Markram claiming a successful simulation or not? If he is, then he cannot avoid claiming that the things he left out are irrelevant to the way the brain functions, which is quite a claim. If he’s not claiming a successful simulation, what’s the fuss for? On this he really wants to have his cake and eat it – he wants the acclaim of having achieved an unprecedented simulation at the same time as the immunity from attack he thinks he can get from saying it’s only ‘a first draft’. I repeat, you don’t publish first drafts.

    On that I thought I was being quite relaxed about letting him nibble the cake a bit, but really if he won’t say whether the simulation really works or not his cake is half-baked.

  7. Peter I think you are being a bit too black and white about this in a way that shows an unrealistic view of how modelling works.

    Even missiles, if they don’t blow up their target, can be further or closer to the target. Blue Brain is getting us closer to our target, even if there has been no explosion yet.

    Whether the simulation is ‘successful or not’ is another strange binary way of looking at this. Sure, no explosions, but it is successful in some ways, and likely not successful in other ways. I’m not sure what you expect here. It seems an incremental step toward something where we have realistic expectations of a bigger payoff in the future, a payoff that is impossible to predict in very specific terms right now.

    In experimental neuroscience we look at real neuronal data in gross statistical terms. We actually would expect modellers to reproduce such statistical regularities we see in our data. This is standard practice. Rarely do modellers reproduce every spike down to ms precision. Rarely do brains even do this! This is not some idiosyncratic feature of Blue Brain, but how modelling complex neural systems works in general (e.g., look at Izhikevich’s large-scale model from PNAS a few years back).

    Is this paper being excessively hyped or something?

  8. I think Peter is saying that if the scientific question is A and you expend mega resources modeling B, then what you might call success is really a non-sequitur.

  9. Yes, that’s right.

    If your model neurons don’t do information processing, what are you saying neurons are for? It surely can’t be that they merely need to produce the right kind of statistical noise. That would be like producing a novel-writing program and claiming success when you generated random word strings. You’re not on an incremental path towards the goal, you haven’t really even started. Markram might say it’s just some useful work on a substrate for a future simulation – fine but not exciting.

    At the risk of confusing things by introducing a further point – isn’t it rather likely that the statistical patterns which are hailed as success here derive directly from the algorithm used to wire up the simulation? If that’s true Markram’s merely reading off patterns he built in, and the claim about parameter tuning looks rather hollow.

  10. The whole thing seems to suggest that one day the Blue Brain will complete and we’ll be presented with a full simulation of a human brain. Markram will assure us that its patterns of activity look just like those of a real brain.

    Q – So what is it thinking?
    Markram – I don’t know.
    Q – Can we communicate with it?
    M – No.
    Q – Can it control a robot?
    M – No.
    Q – How does the mind work?
    M – I haven’t the faintest idea. But look at the exquisite accuracy of this simulation!

  11. Pingback: Week Links: 18 October 2015 | Sakeel

  12. Producing similar statistical properties to real brains is not trivial, and it’s not noise. For instance, it was a major accomplishment to recreate statistical properties associated with sleep-wakefulness (e.g., levels of synchrony/asynchrony) in biophysically realistic models. Sure, they may not model everything (e.g., sensory processing), but that would be a strange thing to expect of any model.

  13. Perhaps I am being a bit jaundiced – though I don’t think a few more vigorous critiques would do the project any harm!

  14. Vigorous criticism is great: I’m just not seeing them hitting the target convincingly, contrary to the usual fare here. 🙂

    Putting my view in a different way: if you were to model C elegans nervous system, what methods would you suggest? We’d pretty much do exactly what the Blue Brain people are doing.

    Taking a tiny piece of the mouse brain is also taking a model systems approach. Except instead of taking a simple system and modelling the whole thing, you take a tiny part of a really big complex system and model that. This is also a valid way to reduce the complexity of studying a brain. It’s how we learned about center-surround receptive fields, retinal ganglion cells, etc..

    Now, there are good reasons to *supplement* such approaches. We don’t stop doing psychology or single-channel biophysics. Ultimately, all these approaches will be complementary aspects of the overarching story about the mouse’s brain.

  15. In re-reading my bit about C-elegans, I should clarify (I really wish I could just edit :)): I am not saying their method is the *only* method. I am only saying that the criticisms I have seen are not compelling, certainly no good arguments to jettison the Blue Brain project.

    Rather, my take is that their approach *needs* to be tried, as it is the natural and obvious culmination of about 60 years of *extremely* solid experimental and computational neuroscience that started with Hodgkin and Huxley in 1952. A priori rejections of the approach come off as ill-founded and unaware of this (and, more commonly, they come off as politically founded, though not in your case!).

    It would be a big mistake to stop funding this initiative, a big step back for neuroscience, on the order of pulling funding for the superconducting supercollider in physics in 1993.

    Note I have no vested interest in this myself, though I realize it is coming off that way. 🙂 Indeed, in the lab I work in, the PI thinks the whole project is doomed to fail because brains are too complex to model using digital computers.

  16. If you were to model… what methods would you suggest?

    I’d start with a clear hypothesis about what the C. Elegans nervous system does, and how; and I’d look for my simulation to do that, by that method.

    Eric, I’d feel better about your saying my points miss the mark if it were clearer to me that you’ve actually assimilated them. But it seems clear we’re not getting any further on this at the moment.

  17. I’d start with a clear hypothesis about what the C. Elegans nervous system does, and how; and I’d look for my simulation to do that, by that method.

    I agree it is nice to have a functional overlay to help guide and interpret our models. But it isn’t necessary. My argument is that if we had a detailed biophysical model that “merely” reproduced the observed spatiotemporal dynamics of the C elegans nervous system, that would be a (very) successful and useful model.

    The two approaches are complementary. At one extreme, imagine if we had a model of C elegans that actually reproduced the neuronal dynamics correctly, to every possible experimental perturbation. That would help us think about C elegans in functional terms. You can start asking what happens when you ablate various structures in the model, etc..

    In practice, it’s not a competition between the perspectives, but a coevolution.

    You are right that their first paper is obsessed with one side of the coin. But this is actually correcting for a pretty big deficit: there is no shortage of neurophilosophers and such to speculate in functional terms. It’s what happens in every discussion section of pretty much every paper in neuroscience. Few people have the skill, wherewithal, and resources to attack the quantitative nitty-gritty details required to actually model a column of neocortex. These guys are pretty much the first group to try at this level of detail.

    Also, perhaps the lack of a good functional story about the cortex is actually a consequence of the lack of a detailed mechanistic/dynamical story at the scales the Blue Brain folk are aiming at. That is, it could be that we will not even get to where you want to go until we have more of the low-level dynamical details to help fuel conceptual innovation and speculation in functional terms. A priori, it is impossible to tell.
