Posts tagged ‘Markram’

Aeon Magazine has published my opinion piece on brain simulation. Go on over there and comment! Why not like me while you’re at it!!!

I’m sorry about that outburst – I got a little over-excited…

Coming soon (here) Babbage’s forgotten rival…

The brain is not a gland. But Henry Markram seems to be in danger of simulating one – a kind of electric gland.

What am I on about? The Blue Brain Project has published details of its most ambitious simulation yet: a computer model of a tiny sliver of rat brain. That may not sound exciting on the face of it, but the level of detail is unprecedented…

The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm3 containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses.
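A rough back-of-envelope from the figures quoted above gives a sense of the scale – this is my own arithmetic, not a calculation from the paper:

```python
# Back-of-envelope averages from the figures quoted in the abstract above.
neurons = 31_000
connections = 8_000_000   # ~8 million connections
synapses = 37_000_000     # ~37 million synapses

synapses_per_connection = synapses / connections   # ~4.6 synapses per connection
outgoing_per_neuron = connections / neurons        # ~258 directed connections per neuron

print(f"synapses per connection: {synapses_per_connection:.1f}")
print(f"connections per neuron (directed): {outgoing_per_neuron:.0f}")
```

So each neuron in the sliver touches a few hundred others, through a handful of synapses per connection.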

The results are good. Without parameter tuning – that is, without artificial adjustments to make it work the way it should – the digital simulation produces patterns of electrical activity that resemble those of real slivers of rat brain. The paper is accessible here. It seems a significant achievement and certainly attracted a lot of generally positive attention – but there are some significant problems. The first is that the methodological issues which were always evident remain unresolved. The second is that the simulation itself has some major weaknesses. The third is that, as a result of those weaknesses, the simulation implicitly commits Markram to some odd claims, ones he probably doesn’t want to make.

First, the methodology. The simulation is claimed as a success, but how do we know? If we’re simulating a heart, then it’s fairly clear it needs to pump blood. If we’re simulating a gland, it needs to secrete certain substances. The brain? It’s a little more complicated. Markram seems implicitly to take the view that the role of brain tissue is to generate certain kinds of electrical activity; not particular functional outputs, just generic activity of certain kinds.

One danger with that is a kind of circularity. Markram decides the brain works a certain way, he builds a simulation that works like that, and triumphantly shows us that his simulation does indeed work that way. Vindication! It could be that he is simply ignoring the most important things about neural tissue, things that he ought to be discovering. Instead he might just be cycling round in confirmation of his own initial expectations. One of the big dangers of the Blue Brain project is that it might entrench existing prejudices about how the brain works and stop a whole generation from thinking more imaginatively about new ideas.

The Blue simulation produces certain patterns of electrical activity that look like those of real rat brain tissue – but only in general terms. Are the actual details important? After all, a string of random characters with certain formal constraints looks just like meaningful text, or valid code, or useful stretches of DNA, but is in fact useless gibberish. Putting in constraints which structure the random text a little and provide a degree of realism is a relatively minor task; getting output that’s meaningful is the challenge. It looks awfully likely that the Blue simulation has done the former rather than the latter, and to be brutal that’s not very interesting. At worst it could be like simulating an automobile with beautifully realistic engine noise that nevertheless never moves. We might well think that the project is falling into the trap I mentioned last time: mistaking information about the brain for the information in the brain.

Now it could be that actually the simulation is working better than that; perhaps it isn’t as generic as it seems, perhaps this particular bit of rat brain works somewhat generically anyway; or perhaps somehow in situ the tissue trains or conditions itself, saving the project most of the really difficult work. The final answer to the objections above might be if the simulation could be plugged back into a living rat brain and the rat behaviour shown to continue properly. If we could do that it would sidestep the difficult questions about how the brain operates; if the rat behaves normally, then even though we still don’t know why, we know we’re doing it right. In practice it doesn’t seem very likely that that would work, however: the brain is surely about producing specific control outputs, not about glandularly secreting generic electrical activity.

A second set of issues relates to the limitations of the simulation. Several of the significant factors I mentioned above have been left out; notably there are no astrocytes and no neurotransmitters. The latter is a particularly surprising omission because Markram himself has done significant work in the past on trying to clarify this area. The fact that the project has chosen to showcase a simulation without them must give rise to a suspicion that its ambitions are being curtailed by the daunting reality; that there might even be a dawning realisation internally that what has been bitten off here really is far beyond chewing, and the need to deliver has to trump realism. That would be a worrying straw in the wind so far as the project’s future is concerned.

In addition, while the simulation reproduces a large number of different types of neuron, the actual wiring has been determined by an algorithm. A lot depends on this: if the algorithm generates useful and meaningful connections then it is itself a master-work beside which the actual simulation is trivial. If not, then we’re back with the question of whether generic kinds of connection are really good enough. They may produce convincing generic activity, and maybe that’s even good enough for certain areas of rat brain, but we can be pretty sure it isn’t good enough for brain activity generally.
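The published wiring algorithm is far more elaborate than anything I could reproduce here, but the general idea of deriving connections from arbor overlap can be caricatured in a few lines. Everything in this sketch – the spherical ‘arbors’, the radius, the thinning fraction standing in for a bouton-density cap – is invented for illustration and is emphatically not the Blue Brain algorithm itself:

```python
import random

random.seed(0)

# Toy caricature of overlap-based wiring: place neurons in a unit cube,
# treat each arbor as a sphere of fixed (invented) radius, list pairs whose
# arbors overlap, then thin the potential contacts down by an (invented)
# fraction standing in for a biological bouton-density constraint.
N = 200
ARBOR_RADIUS = 0.15     # invented arbor reach
KEEP_FRACTION = 0.3     # invented stand-in for a bouton-density cap

pos = [(random.random(), random.random(), random.random()) for _ in range(N)]

def overlaps(a, b):
    dx, dy, dz = a[0] - b[0], a[1] - b[1], a[2] - b[2]
    return (dx * dx + dy * dy + dz * dz) ** 0.5 < 2 * ARBOR_RADIUS

potential = [(i, j) for i in range(N) for j in range(i + 1, N)
             if overlaps(pos[i], pos[j])]
connections = [pair for pair in potential if random.random() < KEEP_FRACTION]

print(len(potential), "potential appositions ->", len(connections), "connections")
```

The point of the caricature is just that geometry plus a density cap yields ‘generic’ wiring: nothing in it knows what any particular connection is for.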

Harking back for a moment to methodology, there’s still something odd in any case about trying to simulate a process you don’t understand. Any simulation reproduces certain features of the original and leaves others out. The choice is normally determined by a thesis about how and why the thing works: that thesis allows you to say which features are functionally necessary and which are irrelevant. Your simulation only models the essential features and its success therefore confirms your view about what really matters and how it all operates. Markram, though, is not starting with an explicit thesis. One consequence is that it is hard to tell whether his simulation is a success or not, because he didn’t tell us clearly enough in advance what it was he was trying to make happen. What we can do is read off the implicit assumptions that the project cannot help embodying in its simulation. To hold up the simulation as a success is to make the implicit claim that the brain is basically an electrical network machine whose essential function is to generate certain general types of neural activity. It implicitly affirms that the features left out of the simulation – notably the vast array and complex role of neurotransmitters and receptors – are essentially irrelevant. That is a remarkable claim, quite unlikely, and I don’t think it’s one Markram really wants to make. But if he doesn’t, consistency obliges him to downplay the current simulation rather than foreground it.

To be fair, the simulation is not exactly being held up as a success in those terms. Markram describes it as a first draft. That’s fair enough up to a point (except that you don’t publish first drafts), but if our first step towards a machine that wrote novels was one that generated the Library of Babel (every possible combination of alphabetic characters plus spaces) we might doubt whether we were going in the right direction. The Blue Brain project in some ways embodies technological impatience; let’s get on and build it and worry about the theory later. The charge is that as a result the project is spending its time simulating the wrong features and distracting attention from the more difficult task of getting a real theoretical understanding; that it is making an electric gland instead of a brain.

The European Human Brain Project seems to be running into problems. This Guardian report notes that an open letter of protest has been published by 170 unhappy neuroscientists. They are seeking to influence and extend a review that is due, hoping they can get a change of direction. I don’t know a great deal about the relevant EU bureaucracy, but I should think the letter-writers’ chances of success are small, not least because in Henry Markram they’re up against a project leader who is determined, resourceful, and not lacking support of his own. There’s a response to the letter here.

It is a little hard to work out exactly what the disagreement is about; the Guardian seems to smoosh together the current objections of former insiders with the criticisms of those who thought the project was radically premature in the first place. I find myself trying to work out what the protestors want, from Markram’s disparaging remarks about them, rather the way we have to reconstruct some ancient heresies from the rebuttals of the authorities, the only place where details survive.

We’re told the disagreement is between those who study behaviour at a high level and the project leaders who want to build simulations from the bottom up. In particular, some cognitive neuroscience projects have been ‘demoted’ to partner status. People say the project has been turned into a technology one; Markram says it always was: he suggests that piling up more data is useless and that instead he’s doing an ICT project which will provide a platform for integrating the data, and that it’s all coming out of an ICT budget anyway.

We naive outsiders had picked up the impression that the project had a single clear goal: a working simulation of a whole human brain. That is sort of still there, but reading the response it seems to be a pretty distant aspiration. Apparently a mouse brain is going to be done first, but even that is a way off; it’s all about the platforms. Earlier documents suggest there will actually be six platforms, only one of which is about brain simulation; the others are neuroinformatics, high performance computing, medical informatics, neuromorphic computing, and neurorobotics – fascinating subjects. The implicit suggestion is that this kind of science can’t be done properly just by working in labs and publishing papers, it requires advanced platforms in which research can be integrated. Really? Speaking as a professional bureaucrat myself, I have to say frankly that that sounds uncommonly like the high-grade bollocks emitted by a project leader who has more money than he knows what to do with. The EU in particular is all about establishing unwanted frameworks and common platforms which lie dead in drawers forever after. If people want to share findings, publishing papers is fine (alright, not flawless). If it’s about doing actual research, having all the projects captured by a common platform which might embody common errors and common weaknesses doesn’t sound like a good idea at all. My brain doesn’t know, but my gut says the platforms won’t be much use.

Let’s be honest, I don’t really know what’s going on, but if one were cynical one might suppose that the success of the Human Genome Project made the authorities open to other grand projects, and one on the brain hit the spot. The problem is that we knew what a map of the genome would be like, and we pretty much knew it could be done and how. We don’t have a similarly clear idea relating to the brain. However, the concept was appealing enough to attract a big pot of money, both in the EU and then in the US (an even bigger pot). The people who got control of these pots cannot deliver anything like the map of the human genome, but they can buy in the support of fund-hungry researchers by disbursing some of the gold while keeping the politicians and bureaucrats happy by wrapping everything in the afore-mentioned bollocks. The authors of the protest letter perhaps ought to be criticising the whole idea, but really they’re just upset about being left out. The deeper sceptics who always said the project was premature – though they may have thought they were talking about brain simulation, not a set of integrative platforms – were probably right; but there’s no money in that.

Grand projects like this are probably rarely the best way to control research funding, but they do get funding. Maybe something good somewhere will accidentally get the help it needs; meanwhile we’ll be getting some really great European platforms.

Following on from preceding discussion, Doru kindly provided this very interesting link to information about a new chip designed at MIT to mimic the function of real neurons.

I hadn’t realised how much was going on, but it seems MIT is by no means alone in wanting to create such a chip. In the previous post I mentioned Dharmendra Modha’s somewhat controversial simulations of mammal brains: under his project leadership IBM, with DARPA participation, is now also working on a chip that simulates neuronal interaction. But while MIT and IBM slug it out, those pesky Europeans had already produced a neural chip as part of the FACETS project back in 2009. Or had they? FACETS is now closed and its work continues within the BrainScaleS project, working closely with Henry Markram’s Blue Brain project at EPFL, in which IBM, unless I’m getting confused by now, is also involved. Stanford, and no doubt others I’ve missed, are involved in the same kind of research.

So it seems that a lot of people think a neuron-simulating chip is a promising line to follow; if I were cynical I would also glean from the publicity that producing one that actually does useful stuff is not as easy as producing a design or a prototype; nevertheless it seems clear that this is an idea with legs.

What are these chips actually meant to do? There is a spectrum here from the pure simulation of what real brains really do to a loose importation of a functional idea which might be useful in computation regardless of biological realism. One obstacle for chip designers is that not all neurons are the same. If you are at the realist end of the spectrum, this is a serious issue but not necessarily an insoluble one. If we had to simulate the specific details of every single neuron in a brain the task would become insanely large: but it is probable that neurons are to some degree standardised. Categorising them is, so far as I know, a task which has not been completed for any complex brain: for Caenorhabditis elegans, the only organism whose connectome is fully known, it turned out that the number of categories was only slightly lower than the number of neurons, once allowance was made for bilateral symmetry; but that probably just reflects the very small number of neurons possessed by Caenorhabditis (about 300) and it is highly likely that in a human brain the ratio  would be much more favourable. We might not have to simulate more than a few hundred different kinds of standard neuron to get a pretty good working approximation of the real thing.

But of course we don’t necessarily care that much about biological realism. Simulating all the different types of neurons might be a task like simulating real feathers, with the minute intricate barbicel latching structures – still unreplicated by human technology so far as I know – which make them such sophisticated air controllers, whereas to achieve flight it turns out we don’t need to consider any structure below the level of wing. It may well be that one kind of simulated neuron will be more than enough for many revolutionary projects, and perhaps even for some form of consciousness.

It’s very interesting to see that the MIT chip is described as working in a non-digital, analog way (Does anyone now remember the era when no-one knew whether digital or analog computers were going to be the wave of the future?). Stanford’s Neurogrid project is also said to use analog methods, while BrainScaleS speaks of non-Von Neumann approaches, which could refer to localised data storage or to parallelism but often just means ‘unconventional’. This all sounds like a tacit concession to those who have argued that the human mind was in some important respects non-computational: Penrose for mathematical insight, Searle for subjective experience, to name but two. My guess is that Penrose would be open-minded about the capacities of a non-computational neuron chip, but that Searle would probably say it was still the wrong kind of stuff to support consciousness.

In one respect the emergence of chips that mimic neurons is highly encouraging: it represents a nearly-complete bridge between neurology at one end and AI at the other. In both fields people have spoken of ‘connectionism’ in slightly different senses, but now there is a real prospect of the two converging. This is remarkable – I can’t think of another case where two different fields have tunnelled towards each other and met so neatly – and in its way seems to be a significant step towards the reunification of the physical and the mental. But let’s wait and see if the chips live up to the promise.

This paper on ‘Biology of Consciousness’ embodies a remarkable alliance: authored by Gerald Edelman, Joseph Gally, and Bernard Baars, it brings together Edelman’s Neural Darwinism and Baars’ Global Workspace into a single united framework. In this field we’re used to the idea that for every two authors there are three theories, so when a union occurs between two highly-respected theories there must be something interesting going on.

As the title suggests, the paper aims to take a biologically-based view, and one that deals with primary consciousness. In human beings the presence of language among other factors adds further layers of complexity to consciousness; here we’re dealing with the more basic form which, it is implied, other vertebrates can reasonably be assumed to share at least in some degree. Research suggests that consciousness of this kind is present when certain kinds of connection between thalamus and cortex are active: other parts of the brain can be excised without eradicating consciousness. In fact, we can take slices out of the cortex and thalamus without banishing the phenomenon either: the really crucial part of the brain appears to be the thalamic intralaminar nuclei.  Why them in particular? Their axons radiate out to all areas of the cortex, so it seems highly likely that the crucial element is indeed the connections between thalamus and cortex.

The proposal in a nutshell is that dynamically variable groups of neurons in cortex and thalamus, dispersed but re-entrantly connected, constitute a flexible Global Workspace where different inputs can be brought together, and that this is the physical basis of consciousness. Given the extreme diversity and variation of the inputs, the process cannot be effectively ring-mastered by a central control; instead the contents and interactions are determined by a selective process – Edelman’s neural Darwinism (or neural group selection): developmental selection (‘fire together, wire together’), experiential selection, and co-ordination through re-entry.
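‘Fire together, wire together’ is the familiar Hebbian rule: connections strengthen where pre- and post-synaptic activity coincide. A minimal sketch, with the learning rate and firing patterns invented purely for illustration:

```python
# Minimal Hebbian update: weights grow where pre- and post-synaptic
# activity coincide ("fire together, wire together").
# The learning rate and firing patterns are invented for illustration.
eta = 0.1  # learning rate

def hebb_step(w, pre, post):
    # pre, post: lists of 0/1 firing indicators; w[i][j] connects pre i to post j
    return [[w[i][j] + eta * pre[i] * post[j] for j in range(len(post))]
            for i in range(len(pre))]

w = [[0.0, 0.0], [0.0, 0.0]]
for pre, post in [([1, 0], [1, 0]), ([1, 0], [1, 0]), ([0, 1], [0, 1])]:
    w = hebb_step(w, pre, post)

print(w)  # only the pairs that fired together have strengthened
```

Note there is no central controller in the rule: structure emerges purely from local coincidences, which is the flavour of Edelman’s selectionist picture.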

This all seems to stack up very well (it seems almost too sensible to be the explanation for anything as strange as consciousness). The authors note that this theory helps explain the unity of consciousness. It might seem that it would be useful for a vertebrate to be able to pay attention to several different inputs at once, thinking separately about different potential sources of food, for example: but it doesn’t seem to work that way – in practice there seems to be only one subject of attention at once; perhaps that’s because there is only one ‘Dynamic Core’. This constraint must have compensating advantages, and the authors suggest that these may lie in the ability of a single piece of data to be reflected quickly across a whole raft of different sub-systems. I don’t know whether that is the explanation, but I suspect a good reason for unity has to do with outputs rather than inputs. It might seem useful to deal with more than one input at a time, but having more than one plan of action in response has obvious negative survival value. It seems plausible that part of the value of a Global Workspace would come from its role in filtering down multiple stimuli towards a single coherent set of actions. And indeed, the authors reckon that linked changes in the core could give rise to a coherent flow of discriminations which could account for the ‘stream of consciousness’. I’m not altogether sure about that – without saying it’s impossible that a selective process without central control can give rise to the kind of intelligible flow we experience in our mental processes, I don’t quite see how the trick is done. Darwin’s original brand of evolution, after all, gave rise to speciation, not coherence of development. But no doubt much more could be said about this.

Thus far, we seem on pretty solid ground. The authors note that they haven’t accounted for certain key features of consciousness, in particular subjective experience and the sense of self: they also mention intentionality, or meaningfulness.  These are, as they say, non-trivial matters and I think honour would have been satisfied if the paper concluded there: instead however, the authors gird their loins and give us a quick view of how these problems might in their view be vanquished.

They start out by emphasising the importance of embodiment and the context of the ‘behavioural trinity’ of brain, body, and world. By integrating sensory and motor signals with stored memories, the ‘Dynamic Core’ can, they suggest, generate conceptual content and provide the basis for intentionality. This might be on the right track, but it doesn’t really tell us what concepts are or how intentionality works: it’s really only an indication of the kind of theory of intentionality which, in a full account, might occupy this space.

On subjective experience, or qualia, the authors point out that neural and bodily responses are by their nature private, and that no third-person description is powerful enough to convey the actual experience. They go on to deny that consciousness is causal: it is, they say, the underlying neural events that have causal power. This seems like a clear endorsement of epiphenomenalism, but I’m not clear how radical they mean to be. One interpretation is that they’re saying consciousness is like the billows: what makes the billows smooth and bright? Well, billows may be things we want to talk about when looking at the surface of the sea, but really if we want to understand them there’s no theory of billows independent of the underlying hydrodynamics. Billows in themselves have no particular explanatory power. On the other hand, we might be talking about the Hepplewhiteness of a table. This particular table may be Hepplewhite, or it may be fake. Its Hepplewhiteness does not affect its ability to hold up cups; all that kind of thing is down to its physical properties. But at a higher level of interpretation Hepplewhiteness may be the thing that caused you to buy it for a decent sum of money. I’m not clear where on this spectrum the authors are placing consciousness – they seem to be leaning towards the ‘nothing but’ end, but personally I think it’s too hard to reconcile our intuitive sense of agency with anything less than Hepplewhiteness.

On the self, the authors suggest that neural signals about one’s own responses and proprioception generate a sense of oneself as a separate entity: but they do not address the question of whether and in what sense we can be said to possess real agency: the tenor of the discussion seems sceptical, but doesn’t really go into great depth. This is a little surprising, because the Global Workspace offers a natural locus in which to repose the self. It would be easy, for example, to develop a compatibilist theory of free will in which free acts were defined as those which stem from processes in the workspace, but that option is not explored.

The paper concludes with a call to arms: if all this is right, then the best way to vindicate it might be to develop a conscious artefact: a machine built on this model which displays signs of consciousness – a benchmark might be clear signs of the ability to rotate an image or hold a simulation. The authors acknowledge that there might be technical constraints, but I think they can afford to be optimistic. I believe Henry Markram, of the Blue Brain project, is now pressing for the construction of a supercomputer able to simulate an entire brain in full detail, so the construction of a mere Global Dynamic Core Workspace ought to be within the bounds of possibility – if there are any takers…?

The way the brain works is more complex than we thought. That’s a conclusion that several pieces of research over recent years have suggested for one reason or another: but some particularly interesting conclusions are reported in a paper in Nature Neuroscience (Anastassiou, Perin, Markram, and Koch). It has generally been the assumption that neurons are effectively isolated, interacting only at synapses: it was known that they could be influenced by each other’s electric fields, but it was generally thought that given the typically tiny fields involved, these effects could be disregarded. The only known exceptions of any significance were in certain cases where unusually large fields could induce ‘ephaptic coupling’, interfering with the normal working of neurons and causing problems.

Given the microscopic sizes involved and the weakness of the fields, measuring the actual influence of ephaptic effects is difficult, but for the series of experiments reported here a method was devised using up to twelve electrodes for a single neuron. It was found that extracellular fluctuations did produce effects within the neuron, at the minuscule level expected: however, although the effects were too small to produce any immediate additional action potentials, induced fluctuations in one neuron did influence neighbouring cells, producing a synchronisation of spike timing. In short, it turns out that neurons can influence each other and synchronise themselves through a mechanism completely independent of synapses.
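As a toy illustration of how even weak, non-synaptic coupling can pull membrane potentials together, consider two leaky compartments driven slightly differently, with a weak diffusive term standing in (very loosely) for field coupling. All the constants are invented, and this is a caricature of the idea rather than the measured physiology:

```python
# Two leaky 'membrane potentials' with slightly different drive.
# A weak diffusive term stands in (very loosely) for ephaptic coupling;
# all constants are invented for illustration.
TAU = 10.0   # membrane time constant (ms)
DT = 0.1     # integration time step (ms)

def run(eps, steps=5000, i1=1.00, i2=1.05):
    """Integrate both potentials; eps is the coupling strength."""
    v1 = v2 = 0.0
    for _ in range(steps):
        dv1 = (i1 - v1) / TAU + eps * (v2 - v1)
        dv2 = (i2 - v2) / TAU + eps * (v1 - v2)
        v1 += DT * dv1
        v2 += DT * dv2
    return abs(v1 - v2)

gap_uncoupled = run(eps=0.0)    # settles to the full drive difference, 0.05
gap_coupled = run(eps=0.05)     # weak coupling shrinks the gap

print(gap_uncoupled, ">", gap_coupled)
```

Even a coupling term far too weak to drive either compartment on its own steadily narrows the difference between them, which is the qualitative shape of the synchronisation result.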

So what? Well, first this may suggest that we have been missing an important part of the way the brain functions. That has obvious implications for brain simulations, and curiously enough, one of the names on the paper (he helped with the writing) is that of Henry Markram, leader of the most ambitious brain simulation project of all, Blue Brain. Things seem to have gone quiet on that project since completion of ‘phase one’; I suppose it is awaiting either more funding or the advances in technology which Markram foresaw as the route to a total brain simulation. In the meantime it seems the new research shows that, like all simulations to date, Blue Brain was built on an incomplete picture, and as it stood was doomed to ultimate failure.

I suppose, in the second place, there may be implications for connectionism. I don’t think neural networks are meant to be precise brain simulations, but the suggestion that a key mechanism has been missing from our understanding of the brain might at least suggest that a new line of research, building an equivalent mechanism into connectionist systems, could yield interesting results.

But third and most remarkable, this must give a big boost to those who have suggested that consciousness resides in the brain’s electrical field: Sue Pockett, for one, but above all Johnjoe McFadden, who back in 2002 declared that the effects of the brain’s endogenous electromagnetic fields deserved more attention. Citing earlier studies which had shown modulation of neuron firing by very weak fields, he concluded:

By whatever mechanism, it is clear that very weak em field fluctuations are capable of modulating neurone-firing patterns. These exogenous fields are weaker than the perturbations in the brain’s endogenous em field that are induced during normal neuronal activity. The conclusion is inescapable: the brain’s endogenous em field must influence neuronal information processing in the brain.

We may still hold back from agreeing that consciousness is to be identified with an electromagnetic field, but he certainly seems to have been ahead of the game on this.