Brain on a chip

Following on from the preceding discussion, Doru kindly provided this very interesting link to information about a new chip developed at MIT which is designed to mimic the function of real neurons.

I hadn’t realised how much was going on, but it seems MIT is by no means alone in wanting to create such a chip. In the previous post I mentioned Dharmendra Modha’s somewhat controversial simulations of mammal brains: under his project leadership IBM, with DARPA participation, is now also working on a chip that simulates neuronal interaction. But while MIT and IBM slug it out, those pesky Europeans had already produced a neural chip as part of the FACETS project back in 2009. Or had they? FACETS is now closed and its work continues within the BrainScaleS project working closely with Henry Markram’s Blue Brain project at EPFL, in which IBM, unless I’m getting confused by now, is also involved. Stanford and no doubt others I’ve missed are pursuing the same kind of research.

So it seems that a lot of people think a neuron-simulating chip is a promising line to follow; if I were cynical I would also glean from the publicity that producing one that actually does useful stuff is not as easy as producing a design or a prototype; nevertheless it seems clear that this is an idea with legs.

What are these chips actually meant to do? There is a spectrum here from the pure simulation of what real brains really do to a loose importation of a functional idea which might be useful in computation regardless of biological realism. One obstacle for chip designers is that not all neurons are the same. If you are at the realist end of the spectrum, this is a serious issue but not necessarily an insoluble one. If we had to simulate the specific details of every single neuron in a brain the task would become insanely large: but it is probable that neurons are to some degree standardised. Categorising them is, so far as I know, a task which has not been completed for any complex brain: for Caenorhabditis elegans, the only organism whose connectome is fully known, it turned out that the number of categories was only slightly lower than the number of neurons, once allowance was made for bilateral symmetry; but that probably just reflects the very small number of neurons possessed by Caenorhabditis (about 300) and it is highly likely that in a human brain the ratio would be much more favourable. We might not have to simulate more than a few hundred different kinds of standard neuron to get a pretty good working approximation of the real thing.
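
The economics of that standardisation can be sketched in code. The toy model below is entirely invented for illustration (the type names and parameter values correspond to no real catalogue of neurons): it shows how a population of hundreds of simulated leaky integrate-and-fire neurons can be driven by just a handful of standard parameter sets.

```python
import random

# A minimal sketch, not any lab's actual model: leaky integrate-and-fire
# neurons drawn from a few "standard types". The types and their
# parameters are invented purely for illustration.
NEURON_TYPES = {
    "fast":  {"tau": 5.0,  "threshold": 1.0},
    "slow":  {"tau": 20.0, "threshold": 1.5},
    "tonic": {"tau": 10.0, "threshold": 0.8},
}

def simulate(type_name, input_current, steps=100, dt=1.0):
    """Euler-integrate dv/dt = (-v + I) / tau and count threshold crossings."""
    p = NEURON_TYPES[type_name]
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v + input_current) / p["tau"]
        if v >= p["threshold"]:
            spikes += 1
            v = 0.0  # reset membrane potential after a spike
    return spikes

# A "brain" of 300 neurons needs only 3 parameter sets, not 300 bespoke models:
population = [random.choice(list(NEURON_TYPES)) for _ in range(300)]
rates = {t: simulate(t, input_current=2.0) for t in NEURON_TYPES}
```

With a constant drive the "fast" type fires much more often than the "slow" one, and a sub-threshold input produces no spikes at all: three parameter sets already yield qualitatively distinct behaviours.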

But of course we don’t necessarily care that much about biological realism. Simulating all the different types of neurons might be a task like simulating real feathers, with the minute intricate barbicel latching structures – still unreplicated by human technology so far as I know – which make them such sophisticated air controllers, whereas to achieve flight it turns out we don’t need to consider any structure below the level of wing. It may well be that one kind of simulated neuron will be more than enough for many revolutionary projects, and perhaps even for some form of consciousness.

It’s very interesting to see that the MIT chip is described as working in a non-digital, analog way (Does anyone now remember the era when no-one knew whether digital or analog computers were going to be the wave of the future?). Stanford’s Neurogrid project is also said to use analog methods, while BrainScaleS speaks of non-Von Neumann approaches, which could refer to localised data storage or to parallelism but often just means ‘unconventional’. This all sounds like a tacit concession to those who have argued that the human mind was in some important respects non-computational: Penrose for mathematical insight, Searle for subjective experience, to name but two. My guess is that Penrose would be open-minded about the capacities of a non-computational neuron chip, but that Searle would probably say it was still the wrong kind of stuff to support consciousness.

In one respect the emergence of chips that mimic neurons is highly encouraging: it represents a nearly-complete bridge between neurology at one end and AI at the other. In both fields people have spoken of ‘connectionism’ in slightly different senses, but now there is a real prospect of the two converging. This is remarkable – I can’t think of another case where two different fields have tunnelled towards each other and met so neatly – and in its way seems to be a significant step towards the reunification of the physical and the mental. But let’s wait and see if the chips live up to the promise.

17 thoughts on “Brain on a chip”

  1. Peter, nice “liaison” between posts.

    Maybe I am old-fashioned, I don’t know. But it was like this: you have a phenomenon… you produce a theory that tries to explain it, based on currently accepted laws of nature; if necessary you use simulations to help quantify or better understand the effects of certain parameters of your model, when the equations are very difficult to solve. Actually, simulation is just that: solving equations. If the thing was still too tough, you could try emulation, finding another physical system with comparable behaviour (quite seldom). Would you call it non-computational? What is it to “mimic”: to produce the same output, or to cause the same effect on another system?

    But what is this? They haven’t got anything new to say (theory-wise), so let’s put 10 million euros down the drain. Even more, FP7 is effort-oriented, so even if the whole thing becomes a flop, they’ll get paid. Where is the theoretical basis? Well, I’ve seen worse uses of taxpayers’ money; this one could probably produce some interesting results eventually, and there’ll be educational benefits for young researchers. As you said, let’s see if the chips live up to the promise; btw, I still don’t know what the promise is. I would say: lack of clarity in the presentation of objectives.

    Please, could we have a theory before going to the lab?

  2. Peter, thank you for following up with this. As an experienced electrical engineer, what really got my attention was the resolution of the longstanding debate over the occurrence of Long-Term Depression (LTD) in relation to the occurrence of Long-Term Potentiation (LTP) in the postsynaptic cell. In my opinion, this has fundamental implications for understanding processes in the brain. The problem is not really understanding how neurons are connected to each other (what matters is only that we have a lot of them, all inherited genetically) but rather how environmental stimuli leave impressions on these connections by strengthening or weakening them, creating cognitive memories.
    Vicente, top-down theories may have some catching-up to do in this field, but ground-up implementation (with its inherent empiricism) has a lot more to catch up on, and is very welcome.
    Also very promising is the development of memristors (I wrote a little about them a few years ago): http://www.doru360.com/doru/?p=323

  3. I have no problem in seeing how the cognitive functions of pattern learning, memory, and recognition might be realized on a chip. My bet is that the most effective architectures will look like synaptic matrices (see Ch. 3 in *The Cognitive Brain*). The most difficult part will be getting proper synaptic transfer weights that are normalized for a wide range of axonal inputs on the post-synaptic cells.

    Building a retinoid system that represents the volumetric space in which a part of the device exists as a fixed perspectival “point” of origin — building subjectivity — will be even more difficult, and perhaps not achievable outside of natural biological creatures.
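
    As a very loose illustration of the normalization problem Arnold raises (a toy sketch of my own, not the synaptic-matrix circuitry described in *The Cognitive Brain*), one can store binary patterns with Hebbian updates and then normalize each post-synaptic cell’s weights, so that cells driven by many axonal inputs do not swamp those driven by few:

```python
# A rough sketch: Hebbian storage of binary patterns in a weight matrix,
# with per-cell (row) normalization of the transfer weights. All names
# and the normalization rule are invented for illustration.

def train(patterns, n):
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        active = [i for i, x in enumerate(p) if x]
        for i in active:          # strengthen co-active connections
            for j in active:
                w[i][j] += 1.0
    for i in range(n):            # normalize each post-synaptic cell's row
        s = sum(w[i])
        if s:
            w[i] = [x / s for x in w[i]]
    return w

def recall(w, cue, threshold=0.1):
    n = len(w)
    drive = [sum(w[i][j] * cue[j] for j in range(n)) for i in range(n)]
    return [1 if d >= threshold else 0 for d in drive]
```

    A partial cue then completes the stored pattern it overlaps with, regardless of how many inputs each cell receives.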

  4. Arnold,
    Is any predetermined/initial knowledge/cognition built into the synaptic matrices of the retinoid system? In other words, if a hypothetical generation of humans were born on Mars without any knowledge about Earth, would they one day realize that something feels wrong with the sky, that it should be blue, not red?

  5. Arnold,

    There is no knowledge built into the egocentric space of the retinoid system.

    Well, not in humans (babies know right from the beginning “where” to suck, but I suppose that can be explained in terms of touch reflexes). What about other species? Many species start walking and navigating around right from the start; would they need some built-in spatial knowledge and functions? Some of them very complex, like chicks that follow the hen.

    Doru, in addition to welcoming it, would you put your own money into it? It is very lazy of us to appeal to ground-up approaches so that we can save some thinking and go ahead. Empiricism is usually tied to very practical purposes, when we want or need a solution irrespective of the theory that might explain it. In this case, I thought we wanted, above all, to know more. Having said this, I don’t believe at all that the project is a waste. If it is a sort of breakthrough strategy to overcome the ideological impasse we seem to be in, then fine.

    Mind you!! A different issue is to carry out experiments on the real system, that is, observing the phenomenon. J.C. Maxwell needed all the preceding empirical work to write his set of field equations, but Ampère, Faraday, Coulomb, Oersted, etc., were not simulating; they were observing the real thing. As I have said before, I would rather spend those budgets on observing real brains; that seems to me the correct empirical way.

  6. Vicente: “What about other species? Many species start walking and navigating around right from the start; would they need some built-in spatial knowledge and functions? Some of them very complex, like chicks that follow the hen.”

    Yes, other creatures navigate right after birth. But it is not because *particular* knowledge about their environment is built into their retinoid space. The retinoid system simply gives them a perspectival representation of the space around them, within which their pre-conscious sensory systems can project the ever-changing objects and events (the content of their phenomenal world) that they have to respond to in an adaptive fashion. Their sensory-motor routines/knowledge might be built in at birth, but these are in their synaptic matrices and motor circuits, not in the innate “hard-wired” Z-plane structure of retinoid space. It is only *after* the changing outputs of these pre-conscious mechanisms are projected into the creature’s subjective (retinoid) world space that they become part of the creature’s consciousness.

  7. Peter

    Is there a “disunion” of the physical and the mental that can be overcome by running programs on new-architecture computer chips, which we can assume have no more correlation to physical brain states than any other kind of chip?

    JBD

  8. I came across this recent paper:

    http://cognition.ups-tlse.fr/annonces/alauneEn.html

    Amazing. Although I don’t completely share the concept of “abstract knowledge” that the research team uses, the results are very interesting. Maybe we are too concerned about complexity. Maybe the secret is in another feature, and simple neural layouts can show conscious traits.

  9. Vicente, in my theoretical model synaptic matrices are the mechanisms for conceptual learning and can do the cognitive job that the honeybee does. See, for example, p.46-47 in *The Cognitive Brain*. But the coding and recognition of relationships between distinctive spatial patterns can be performed without subjectivity/consciousness and is not necessarily a conscious trait. I haven’t yet seen the published paper. The details might convince me that honeybees must have a fixed perspectival representation of the volumetric space in which they live — a tiny retinoid space (?).

  10. Arnold, yes, an appropriate nuance. Have you considered what the minimum size (number of neurons) is for a retinoid space to work, as a design requirement? It seems to me an important parameter (figure of merit) for system characterisation.

    Then, if there is no conscious background, any reference to “abstract information” makes no sense to me. In this particular case, the abstraction is based on the relationship that is set up, beyond the geographical locations, and not directly derived (derivable?) from them, unless there is a goal to serve (function) and an abstraction mechanism (reasoning) operating in place. But if there is no sentient being to experience it, if it is just the result of neural subnetwork associations, then there is no more abstraction than the one involved in pollen digestion.

  11. Vicente: “Have you considered what is the minimum size (number of neurons) for a retinoid space to work, as a design requirement.”

    Interesting question. As I see it, the theoretical minimal design requirement is three Z-planes, with each plane having 9 autaptic cells, together with 120 shifting interneurons: a total of 147 neurons, with no redundancy. However, this minimal retinoid space can only represent the egocentric pattern of simple excitatory events in the world; objects could have no qualia and could not be distinguished from each other when projected to synaptic matrices.
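
    For what it’s worth, the count works out if the 9 autaptic cells are read as per-plane and the 120 shifting interneurons as a total across the three planes (my reading of the figures, not necessarily the intended one):

```python
# Neuron count for the minimal retinoid space described above, assuming
# 9 autaptic cells per Z-plane and 120 shifting interneurons in total.
z_planes = 3
autaptic_per_plane = 9
shifting_interneurons = 120

total = z_planes * autaptic_per_plane + shifting_interneurons  # 27 + 120
assert total == 147
```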

  12. There is a stream of information which is continually transferred between the inextricable connections of the brain and body, which have been formed by evolution and made apparent through growth changes and ageing. Not only do we inherit physical features of past generations, but of course elements of their traits and ways, which we experience through life. It is difficult to separate, especially with humans, what is nature and what is nurture; but with the salmon, for example, how does it know, let alone why, that it needs to return to where it was born to lay its eggs and then die, just as previous generations have done before, unless it has some inbuilt knowledge? There seems to be no nurture from parents; maybe from its environment? Which seems difficult to understand. I think a human born and brought up on Mars with no nurtured knowledge of Earth (which would be difficult to imagine) may not think the red sky was necessarily wrong, but would feel, I think, as it grew and aged, that things were not quite right.

  13. Richard,

    Yes, I think there is a lot of truth in what you say.

    In that sense, one of the main differences between salmon and us is our incredible capacity for adaptation, as a result of our intelligence. That is not given for free; adaptation is very expensive, mostly in psychological currency.

    Even more, we are beginning to discover that our primary instincts and traits are not helping any more; rather, they can pose a big obstacle to the real progress we have to make now.

    It is definitely time to stop being humans, to become something better, may the lottery of genes help a bit.

  14. Vicente,

    “In that sense, one of the main differences between the salmons and us, is our incredible capacity for adaptation, as a result of our intelligence.”

    I think you have it the wrong way round. I think our intelligence is a result of (maybe a behavioural expression of) our capacity for adaptation, which in turn is due to a larger brain that has a greater number of degrees of freedom in which variation can produce viable results. With 300 neurons C. elegans only has so much flexibility available to it.

  15. Ron,

    I believe your claim would have been more acceptable when we were just constrained by a natural environment and simple social groups, a stage we left behind a long time ago. Maybe discovering the use of tools by chance could have fostered other dimensions of intelligence.

    I agree that, probably, the first step is to be equipped with the necessary neuronal machinery, not just linked to size but more likely related to architecture, or both. This first step could perfectly well have been a mutation; the question I believe is still unanswered is: what did this mutation really change, or create?

    I think that in order to make your statement meaningful, you have to refer specifically to “non-anatomical or physiological evolutionary adaptation”, because for other mechanisms of adaptation your statement is evident: “adaptation is a behavioural expression of intelligence”, what else? So intelligence is required in the first place.
