Brain on a chip

Following on from the preceding discussion, Doru kindly provided this very interesting link to information about a new chip developed at MIT which is designed to mimic the function of real neurons.

I hadn’t realised how much was going on, but it seems MIT is by no means alone in wanting to create such a chip. In the previous post I mentioned Dharmendra Modha’s somewhat controversial simulations of mammal brains: under his project leadership IBM, with DARPA participation, is now also working on a chip that simulates neuronal interaction. But while MIT and IBM slug it out, those pesky Europeans had already produced a neural chip as part of the FACETS project back in 2009. Or had they? FACETS is now closed and its work continues within the BrainScaleS project, working closely with Henry Markram’s Blue Brain project at EPFL, in which IBM, unless I’m getting confused by now, is also involved. Stanford and no doubt others I’ve missed are involved in the same kind of research.

So it seems that a lot of people think a neuron-simulating chip is a promising line to follow; if I were cynical I would also glean from the publicity that producing one that actually does useful stuff is not as easy as producing a design or a prototype; nevertheless it seems clear that this is an idea with legs.

What are these chips actually meant to do? There is a spectrum here from the pure simulation of what real brains really do to a loose importation of a functional idea which might be useful in computation regardless of biological realism. One obstacle for chip designers is that not all neurons are the same. If you are at the realist end of the spectrum, this is a serious issue but not necessarily an insoluble one. If we had to simulate the specific details of every single neuron in a brain the task would become insanely large: but it is probable that neurons are to some degree standardised. Categorising them is, so far as I know, a task which has not been completed for any complex brain: for Caenorhabditis elegans, the only organism whose connectome is fully known, it turned out that the number of categories was only slightly lower than the number of neurons, once allowance was made for bilateral symmetry; but that probably just reflects the very small number of neurons possessed by Caenorhabditis (about 300) and it is highly likely that in a human brain the ratio would be much more favourable. We might not have to simulate more than a few hundred different kinds of standard neuron to get a pretty good working approximation of the real thing.
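To make the idea of a “standard neuron” a little more concrete, here is a minimal sketch in Python of what a catalogue of such types might look like: each type is just a small bundle of parameters driving a simple leaky integrate-and-fire update. The type names and numerical values are invented for illustration only; any real classification would be far richer.

```python
# Purely illustrative: a tiny catalogue of "standard" neuron types, each just
# a parameter set for a leaky integrate-and-fire update rule.
from dataclasses import dataclass

@dataclass
class NeuronType:
    tau_m: float     # membrane time constant (ms)
    v_rest: float    # resting potential (mV)
    v_thresh: float  # spike threshold (mV)
    v_reset: float   # reset potential after a spike (mV)

# A few hundred entries like these might stand in for billions of real cells;
# the two types below are invented placeholders, not a real classification.
CATALOGUE = {
    "fast_spiking":    NeuronType(tau_m=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0),
    "regular_spiking": NeuronType(tau_m=20.0, v_rest=-70.0, v_thresh=-55.0, v_reset=-70.0),
}

def step(v, i_syn, kind, dt=1.0):
    """Advance one leaky integrate-and-fire neuron by dt ms; return (voltage, spiked)."""
    v += (-(v - kind.v_rest) + i_syn) * dt / kind.tau_m
    if v >= kind.v_thresh:
        return kind.v_reset, True
    return v, False

# Drive a single "regular spiking" cell with a constant input current.
v, kind = -70.0, CATALOGUE["regular_spiking"]
for t in range(100):
    v, spiked = step(v, i_syn=20.0, kind=kind)
    if spiked:
        print("spike at t =", t, "ms")
```

The point of the sketch is only that, once types are standardised, simulating vast numbers of neurons becomes a matter of parameter lookup rather than cell-by-cell modelling.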

But of course we don’t necessarily care that much about biological realism. Simulating all the different types of neurons might be a task like simulating real feathers, with the minute intricate barbicel latching structures – still unreplicated by human technology so far as I know – which make them such sophisticated air controllers, whereas to achieve flight it turns out we don’t need to consider any structure below the level of wing. It may well be that one kind of simulated neuron will be more than enough for many revolutionary projects, and perhaps even for some form of consciousness.

It’s very interesting to see that the MIT chip is described as working in a non-digital, analog way (Does anyone now remember the era when no-one knew whether digital or analog computers were going to be the wave of the future?). Stanford’s Neurogrid project is also said to use analog methods, while BrainScaleS speaks of non-Von Neumann approaches, which could refer to localised data storage or to parallelism but often just means ‘unconventional’. This all sounds like a tacit concession to those who have argued that the human mind was in some important respects non-computational: Penrose for mathematical insight, Searle for subjective experience, to name but two. My guess is that Penrose would be open-minded about the capacities of a non-computational neuron chip, but that Searle would probably say it was still the wrong kind of stuff to support consciousness.

In one respect the emergence of chips that mimic neurons is highly encouraging: it represents a nearly-complete bridge between neurology at one end and AI at the other. In both fields people have spoken of ‘connectionism’ in slightly different senses, but now there is a real prospect of the two converging. This is remarkable – I can’t think of another case where two different fields have tunnelled towards each other and met so neatly – and in its way seems to be a significant step towards the reunification of the physical and the mental. But let’s wait and see if the chips live up to the promise.

Connectome

Are connectomes the future?  Although the derivation of the word “connectome” makes no sense – as I understand it the “-ome” bit is copied from “genome”, which in turn was copied from “chromosome”, losing a crucial ‘s’ in the process* – it was coined simultaneously but separately by Olaf Sporns and Patric Hagmann, so it is clearly a word whose time to emerge has come.

It means a functionally coherent set of neural connections, or a map of the same. This may be the entire set of connections in a brain or a nervous system, but it may also be a smaller set of connections which link and work together.  There is quite a lot going on in this respect: the Human Connectome Project is preparing to move into its second, data-gathering phase; there’s also the (more modest or perhaps more realistic) Mouse Connectome Project.  One complete connectome, that for the worm Caenorhabditis elegans, already exists (in fact I think it existed before the word “connectome”) and is often mentioned. The Open Connectome Project has a great deal of information about this and much besides.
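In computational terms a connectome is essentially a graph: neurons (or regions) as nodes, connections as weighted edges. Here is a minimal sketch along those lines using Python and networkx; the neuron labels and connections are made up for illustration, not drawn from any real dataset.

```python
# A connectome as a weighted directed graph: nodes are neurons, edges are
# synaptic connections, weights count synapses.  The labels and connections
# below are invented for illustration, not taken from any real wiring diagram.
import networkx as nx

edges = [
    ("sensory_1", "inter_1", 3),   # (pre-synaptic, post-synaptic, synapse count)
    ("sensory_2", "inter_1", 1),
    ("inter_1",   "inter_2", 5),
    ("inter_2",   "motor_1", 2),
    ("inter_1",   "motor_1", 1),
]
G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Once the wiring is in this form, questions about it become graph queries.
print(G.number_of_nodes(), "neurons,", G.number_of_edges(), "connections")
print("inputs to motor_1:", list(G.predecessors("motor_1")))
print("sensory_1 -> motor_1:", nx.shortest_path(G, "sensory_1", "motor_1"))
```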

The idea of the connectome was given a new twist by Sebastian Seung in his TED talk “I Am My Connectome”, and he has now published a book called (guess what) Connectome. In that he gently and thoughtfully backs away a bit from the unqualified claim that personal identity is situated in the connectome of the whole brain. It’s a useful book which falls into three parts: a lucid exposition of the neural structure of the brain; some discussion and proposals on connectomic investigation; and some more fanciful speculation, examined seriously but without losing touch with common sense. Seung touches briefly on the spat between Henry Markram and Dharmendra Modha: Markram’s Blue Brain project, you may recall, aims to simulate an entire brain, and he was infuriated by Modha’s claim to have simulated a cat brain on the basis of a far less detailed approach (Markram’s project seeks to model the complex behaviour of real neurons: Modha’s treated them as standard nodes). Seung is quite supportive of these simulations, but I thought his discussion of the very large difficulties involved and the simplifications inherent even in Markram’s scrupulous approach was implicitly devastating.

What should we make of all this connectome stuff? In practical terms the emergence of the term “connectome” adds nothing much to our conceptual armoury: we could and did talk about neural networks anyway. It’s more that it represents a new surge of confidence that neurological approaches can shoulder aside the psychologists, the programmers, and the philosophers and finally get the study of the human mind moving forward on a scientific basis. To a large extent this confidence springs from technical advances which mean it has finally begun to seem reasonable to talk about drawing up a detailed wiring diagram of sets of neurons.

Curiously though, the term also betrays an unexpected lack of confidence. The deliberate choice of a word which resembles one from genetics and recalls the Human Genome Project clearly indicates an envy of that successful field and a desire to emulate it. This is not the way thrusting, successful fields of study behave; the connectonauts seem to be embarking on their explorations without having shed that slightly resentful feeling of being the junior cousin. Perhaps it’s just that like most of us they are slightly frightened of Richard Dawkins. However, it could also be a well-founded sense that they are getting into something which is likely to turn out complicated in ways that no-one could have foreseen.

One potential source of difficulty lies in the fact that looking for connectomes tends to imply a commitment to modularity.  The modularity (or otherwise) of mind has been extensively discussed by philosophers and psychologists, and neurologists have come up with pretty strong evidence that localisation of many functions is a salient feature of the brain: but there is a risk that the modules devised by evolution don’t match the ones we expect to find, and hence are difficult to recognise or interpret; and worse, it’s quite possible that important functions are not modularised at all, but carried out by heterogeneous and variable sets of neurons distributed over a wide area. If so, looking for coherent connectomes might be a bad way of approaching the brain.
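To see what “looking for connectomes” might amount to computationally, here is a toy sketch: a graph with two deliberately planted modules joined by a single long-range connection, fed to a standard community-detection routine (greedy modularity maximisation, as provided by networkx). All the data are invented; the point is only that the method presupposes that modules of this kind are there to be found.

```python
# Two deliberately planted modules (cliques) joined by one long-range edge;
# greedy modularity maximisation recovers them.  All data are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
module_a = ["a1", "a2", "a3", "a4"]
module_b = ["b1", "b2", "b3", "b4"]
G.add_edges_from((u, v) for i, u in enumerate(module_a) for v in module_a[i + 1:])
G.add_edges_from((u, v) for i, u in enumerate(module_b) for v in module_b[i + 1:])
G.add_edge("a1", "b1")  # a single connection between the two modules

for community in greedy_modularity_communities(G):
    print(sorted(community))
# If real cortical wiring were this cleanly modular, connectome-hunting would
# be straightforward; the worry above is that it may not be.
```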

In this respect we may fall prey to misconceiving the brain by thinking of it as though it were an artefact. Human-designed machines need to have an intelligible structure so that they can be constructed and repaired easily; and for complex systems modularisation is best practice. A complex machine is put together out of replaceable sub-systems that perform discrete tasks; good code is structured to maximise reusability and intelligibility.  But Nature doesn’t have to work like that: evolution might find tangled systems that work fine and actually generate lower overheads.

That might be so, but when we look at animal biology the modularisation is actually pretty striking: the internal organs of a human being, say, are structured in a way that bears a definite resemblance to the components of a machine. Evolution never had to take account of the possibility of replacement parts, but (immune system aside) in fact our internal organisation facilitates transplant surgery much more than it need have done.

Why is that? I’d suggest that there is a secondary principle of evolution at work. When evolution is (so to speak) devising a creature for a new ecological niche, it doesn’t actually start from scratch: it modifies one of the organisms already to hand. Just as a designer finds it easier to build a new machine out of existing parts, a well-modularised creature is more likely to give rise to variant descendants that work in new roles. So besides fitness to survive, we have fitness to give rise to new descendant species; and modularisation enhances that second-order kind of fitness.  Lots of weird creatures that worked well back in the Cambrian did not lend themselves easily to redesign, and hence have no living descendant species, whereas some creature with a backbone, four limbs with five digits each, and a tail proved to be a fertile source of useable variation: leave out some digits, a pair of limbs or the tail; put big legs on the back and small ones on the front, and straightaway you’ve got a viable new modus operandi. In the same way, a creature that bolted an appendix onto its gut might be more ready to produce descendants without the appendix function than one which had reconditioned the function of its whole system (I’m getting well out of my depth here). In short, maybe there is an evolutionary tendency to modularisation after all, so it is reasonable to look for connectomes.  As a further argument, we may note that it would seem to make sense in general for neurons that interact a lot to be close together, forming natural connectomes, though given the promiscuous connectivity of the brain some caution about that may be appropriate.

Anyway, what we care about here is consciousness, so the question for us must be: is there a consciousness connectome? In one sense, of course, there must be (and here we stub our toe on another potential danger of the connectome approach): if we just go on listing all the neurons that play a part in consciousness we will at some point have a full set.  But that might end up being the entire brain: what we want to know is whether there is a coherent self-contained module or set of modules supporting consciousness. Things we might be looking out for would surely include a Global Workspace connectome, and I think perhaps a Higher Order Thought connectome: either might be relatively clearly identifiable on the basis of its pattern of connections.

I don’t think we’re in any position to say yet, but as a speculation I would guess that in fact there is a set of connectomes that have to act together to support a set of interlocking functions making up consciousness: sub-conscious thought, awareness/input, conscious reflection, emotional tone, and so on. I’m not suggesting by any means that that is the actual list; rather I think it is likely that connectome research might cause us to rethink our categories, just as research is already causing us to stop thinking that memory is (as we had always supposed) a single function.

There is already some sign that connectomes might carve up the brain in ways that don’t match our existing ways of thinking about it:  Martijn van den Heuvel and Olaf Sporns have published a paper which seems to show that there are twelve sites of special interest where interconnections are especially dense: they call this a “rich club”, but I think the functional implications of these twelve special zones remain tantalisingly obscure for the moment.
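The “rich club” idea does at least have a crisp graph-theoretic reading: the rich-club coefficient phi(k) measures how densely the nodes of degree greater than k connect to one another, and networkx can compute it directly. Here is a hedged toy example, using an invented hub-heavy network rather than the authors’ actual data.

```python
# The rich-club coefficient phi(k): how densely do nodes of degree > k
# connect to one another?  The hub-heavy toy graph below is invented,
# not the van den Heuvel & Sporns data.
import networkx as nx

G = nx.Graph()
hubs = ["h1", "h2", "h3", "h4"]
# Connect every hub to every other hub...
G.add_edges_from((u, v) for i, u in enumerate(hubs) for v in hubs[i + 1:])
# ...and give each hub a few low-degree "spoke" neighbours.
for h in hubs:
    G.add_edges_from((h, f"{h}_leaf{i}") for i in range(3))

phi = nx.rich_club_coefficient(G, normalized=False)
for k in sorted(phi):
    print(f"phi({k}) = {phi[k]:.2f}")
# phi climbs towards 1.0 at high degree: the well-connected nodes form a
# densely interlinked club among themselves.
```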

In the end my guess is that by about 2040 we shall look back on the connectome as a paradigm that turned out to be inadequate to the full complexity of the brain, but one which inspired research essential to a much improved understanding.

*I do realise BTW that words are under no obligation to mean what the Latin or Greek they were derived from suggests – “chromosome” would mean “colour body”, which is a trifle opaque, to say the least.