Brain in a jar. You probably read about the recent experiment which apparently simulated enough neurons for half a mouse brain running for the equivalent of one second of mouse time – though that required ten seconds of real time on a massively powerful supercomputer. Information is scarce: we don’t know how detailed the simulation was or how well the simulated neurons modelled the behaviour of real mouse neurons (it sounds as though the neurons operated in a void, rather than in a simulated version of the complex biological environment which real neurons inhabit, and which plays a significant part in determining their behaviour). No attempt was made to model any of the structure of a mouse brain, though it is said that some signs of patterns of firing were detected, suggesting that some nascent self-organisation might have been under way – on an optimistic interpretation.

I don’t know how this research relates to the Blue Brain project, which has just about reached what the researchers consider a reasonable neuron-by-neuron electronic simulation of one of the cortical columns in the brain of a juvenile rat. The people there emphasise that the project aims to explore the operation of neuronal structures, not to produce a working brain, still less an artificial consciousness, in a computer.

It is not altogether clear, of course, how well an isolated brain – even a whole one – would work (another unanswered question about the simulation is whether it was fed with simulated inputs or left to its own devices). A brain in a jar, dissected out of its body but still functioning, is a favourite thought-experiment of some philosophers; but others denounce the idea that consciousness is to be attributed to the brain, rather than the whole person, as the ‘mereological fallacy’. A new angle on this point of view has been provided by Rolf Pfeifer and Josh Bongard in How the body shapes the way we think. Their title is actually slightly misleading, since they are not concerned primarily with human beings, but seek instead to provide a set of design principles for better robots, drawing on the history of robotics and on their own theoretical insights. They’re not out to solve the mystery of consciousness, either, but their ideas are of some interest in that connection.

In essence, Pfeifer and Bongard suggest that many early robots relied too much on computation and not enough on bodies and limbs with the right kind of properties. They make several related points. One is the simple observation that sometimes putting springs in a robot’s legs works better than attempting to calculate the exact trajectory its feet need to follow to achieve perfect locomotion. In fact, a great deal can be accomplished merely by designing the right kind of body for your robot. Pfeifer and Bongard cite the ‘passive dynamic walkers’ which achieve a human-style bipedal gait without any sensors or computation at all (they even imply, bizarrely, that the three patron saints of Zurich might have worked on a similar principle: apparently legend relates that once their heads were cut off, the holy three walked off to the site of the Grossmünster church). Similarly, the canine robot Puppy produces a dog-like walk from a very simple input motion, so long as its feet are allowed to slip a bit. Human babies are constructed in such a way that even random neural activity is relatively likely to make their hands sweep the area in front of them, grasp an object, and bring it up towards eyes and mouth: so that this exploratory behaviour is inherently likely to arise in babies even if it is not neurally pre-programmed.

Another point is that interesting (and intelligent) behaviour emerges in response to the environment. Simple block-pushing robots, if designed a certain way, will automatically push the blocks in their environment together into groups. This behaviour, which is really just a function of the placement of the robots’ sensors and the very simple program operating them, looks like the product of relatively complex computation, but in fact simply emerges in the right environment.
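A toy sketch can make the point vivid. The following is a hypothetical one-dimensional caricature, not the actual robots’ program: the ‘robot’ sweeps along a corridor, pushes any lone block ahead of it until that block touches another one, and then turns away. Nothing in the code mentions clusters, yet clusters are what you get.

```python
def sweep(blocks, length):
    """One left-to-right pass of the robot along a corridor of cells.

    Rule of thumb: a lone block in front gets pushed until it lands
    next to another block (or the far wall); a block that already has
    a neighbour triggers the 'side sensor', so the robot turns away
    and skips past the whole cluster.
    """
    occupied = set(blocks)
    i = 0
    while i < length:
        if i not in occupied:
            i += 1
        elif (i + 1) in occupied:
            # Part of a cluster already: turn away, skip past it.
            while i in occupied:
                i += 1
        else:
            # Lone block: push it right until it touches something.
            j = i
            while j + 1 < length and (j + 1) not in occupied:
                j += 1
            occupied.discard(i)
            occupied.add(j)
            i = j          # next iteration skips the new cluster
    return sorted(occupied)

def clusters(blocks):
    """Count maximal runs of adjacent blocks."""
    return sum(1 for k, b in enumerate(blocks)
               if k == 0 or b != blocks[k - 1] + 1)

blocks = [2, 5, 9, 14]
print(clusters(blocks))            # 4 scattered blocks to start with
after = sweep(blocks, 20)
print(after, clusters(after))      # [4, 5, 13, 14] 2 - grouped in pairs
```

No “grouping” goal appears anywhere: the clustering is entirely a by-product of the push-and-turn-away reflex meeting a particular environment.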

Things are looking bleak for the brain in the jar, but there is a much bolder hypothesis to come. Pfeifer and Bongard note that a robot such as Puppy may start moving in an irregular, scrambling way, but gradually falls into a regular trot: in fact, at different speeds there may be several different stable patterns – a walk, a run, a trot, a gallop. These represent attractor states of the system, and it has been shown that neural networks are capable of recognising these states. Pfeifer and Bongard suggest that the recognition of attractor states like this represents the earliest emergence of symbolic structures; that cognitive symbol manipulation arises out of recognising what your own body is doing. Here, they suggest, lies the solution to the symbol grounding problem – and to intentionality itself.
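The idea of attractor states is easy to see in a toy dynamical system (a sketch for illustration, not Puppy’s actual dynamics): dx/dt = x − x³ has two stable attractors, at +1 and −1, and which one the system settles into depends only on where it starts – rather as Puppy’s scramble settles into one gait or another. ‘Recognising’ the attractor then takes nothing more than reading off the settled state, and the gait labels below are of course invented for the example.

```python
def settle(x, dt=0.1, steps=200):
    """Euler-integrate dx/dt = x - x**3; the system has stable
    fixed points (attractors) at +1 and -1, unstable at 0."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

def recognise(x):
    """A trivial 'symbol' for whichever attractor the state fell into."""
    return "gait A" if settle(x) > 0 else "gait B"

print(recognise(0.5))    # starts in +1's basin -> gait A
print(recognise(-0.2))   # starts in -1's basin -> gait B
```

The point of the analogy: the two ‘gaits’ are properties of the dynamics, not of any program, and a very crude observer suffices to attach a discrete label to them.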

If that’s true, then a brain in a jar would have serious problems: moreover, simulating a brain in isolation is very likely to be a complete waste of time because without a suitable body and an environment to move around in, symbolic cognition will just never get going.

Is it true? The authors actually offer relatively little in the way of positive evidence or argument: they tell a plausible story about how cognition might arise, but don’t provide any strong reasons to think that it is the only possible story. I’m not altogether sure that their story goes as far as they think, either. Is the ability to respond to one’s body falling into certain attractor states a sign of symbol manipulation, any more than being able to respond to a flash of light or a blow on the knee? I suspect that symbols require a higher level of abstraction, and the real crux of the story is in how that is achieved – something which looks likely, prima facie, to be internal to the brain.

But I think if I were running a large-scale brain simulation project, these ideas might cause me some concern.


  1. Dan says:

    Hi there.
    I’ve been looking here for detail on Davidson’s ‘Anomalous Monism’ which I’m having problems with. How can the approach say on the one hand that mental states arise from physical brain states in a bottom-up fashion but that there is no translation the other way? Would it not be better just to accept that this is a bridge too far for current science? Or to accept that the problem lies in language for different levels of abstraction (in the same way that describing an animal’s behaviour in ‘neuron speak’ loses its meaning)?

  2. Eric Thomson says:

    The mouse cortex simulation is an amazing leap forward in our ability to run biologically realistic simulations of neural tissue. That it ran in ten seconds is simply astonishing (on your laptop it would have taken orders of magnitude longer).

    The conceptual infrastructure for constructing biologically realistic neural models has been around for over 50 years (the Hodgkin-Huxley model was published in 1952).
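For readers who haven’t met it, the Hodgkin-Huxley model describes a neuron’s membrane voltage with a handful of coupled differential equations. A minimal single-compartment sketch, using the standard textbook squid-axon parameters and naive Euler integration (nothing like the Blue Brain code itself):

```python
import math

# Classic squid-axon parameters (Hodgkin & Huxley, 1952)
C = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # peak conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

# Voltage-dependent opening/closing rates for the gating variables m, h, n
a_m = lambda V: 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
b_m = lambda V: 4.0 * math.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * math.exp(-(V + 65) / 20)
b_h = lambda V: 1.0 / (1 + math.exp(-(V + 35) / 10))
a_n = lambda V: 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * math.exp(-(V + 65) / 80)

def simulate(I=10.0, T=50.0, dt=0.01):
    """Euler-integrate the HH equations under a constant injected
    current I (uA/cm^2); return the voltage trace in mV."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
        I_K = g_K * n**4 * (V - E_K)          # potassium current
        I_L = g_L * (V - E_L)                 # leak current
        V += dt * (I - I_Na - I_K - I_L) / C
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
print(max(trace))   # spike peaks rise well above 0 mV
```

Four state variables per compartment is why a biologically realistic simulation of millions of neurons is such a heavy computation: every neuron means many compartments, each integrated at a fraction of a millisecond.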

    I think the people who focus on the importance of having a body, of the importance of the physics of the body, are right, but such considerations are not in conflict with brain-based explanations. Clearly, without the brain, you won’t process sensory inputs, you won’t move, and that body will be useless. The coupling between the neural and bodily hardware must be understood to explain behavior. Neither can be eliminated. Whether that means neither can be eliminated to explain consciousness is an open question. I tend to think that we could have a conscious brain in a vat (visual hallucinations when you have your eyes closed on LSD, or dreams, suggest this is true).

    The blue brain project, of which their model is a component, is a very exciting development for systems neuroscience, and will likely (and rightly) dominate systems neuroscience for the next few decades.

  3. Peter says:

    Hi Dan. Davidson probably deserves a post of his own – I’ll see what I can do.

    Thanks Eric. There’s no doubt that the simulation is a remarkable achievement, but I wonder whether the blue brain project will be as useful as you suggest (I wish it every success, of course). There seems to me to be a kind of double problem for a project like this – if it produces results which appear to be different from those of actual living tissue, people will say it’s a flawed simulation: if it reproduces the behaviour of real neurons exactly, people will applaud but complain that that doesn’t tell us anything we didn’t know already. Perhaps that is a bit defeatist, though.

    So far as brains in jars are concerned, my own intuition is that you could in principle dissect my brain out now and enable it to continue its mental life in some shape or form: but it’s far more difficult to believe a brain isolated from its body at birth could develop normally. You might be able to produce simulated inputs which would take the place of normal experience – but would that amount to giving the brain a new body, albeit possibly a digital or even a virtual one?

  4. Blue Devil Knight says:

    Peter: I think in either case (accurate versus inaccurate predictions) we have a win. If they are accurate, we keep pushing it, and generate more and more interesting predictions (which we didn’t already know). Also, there are lots of useful mathematical techniques for taking a too-detailed realistic model and whittling it down to something more simple (computationally and conceptually). If the model is inaccurate, then that gives the mathematicians something to worry about. What is nice about the Blue Brain project is that mathematicians and experimentalists are working very closely together in the same lab. It’s just great work. Another nice thing is that they are using Hodgkin-Huxley conductance-based models, which are the gold standard: unlike connectionist pseudoneuronal models, there is little doubt that HH type models are sufficient to capture the electrical phenotypes of neurons, so it becomes a matter of fitting parameters rather than quibbling about what class of model we should be using. (And the HH models connect directly with measurable quantities like currents and voltages and spikes etc).

    I agree with your intuitions about development and brains in vats.

  5. Mikhail says:

    Sorry if this is off topic. I can’t find a way of leaving a message or comment for the “owner” other than in a post. Perhaps that would be a good addition to the blog? Along with a list of most recent comments. What I wanted to say is that there is a very good definition of consciousness which is missing from the blog. “Consciousness as self-awareness”…

  6. Alex says:

    Hi! I’m a genius. I have a recipe for everything into one thing. I’m depressed and am looking for the first one to keep up with my stuff.

    This post has facts in common with the following from my site:

    -symbolic cognition: POST 1 (vortexes), www dot scatteredpiecesof1dying.blogspot dot com

    -Environment recognition. It has connections with recent discoveries in mathematics, topology (Poincaré conjecture, “surgery”). An analogy can be made from that into philosophy. Inverse kinematics and forward kinematics are logics used in motion. Artificial intelligence researchers have trouble designing feet. Their analytics are in inverse kinematics logics. Their constructive work follows not unitary systems that follow strictly forward kinematics logics. People never have a coherent perspective that is defined philosophically from forward kinematics (metaphysics) and inverse kinematics (dialectics). The analogy is about applying surgery to a dialectic need – the foot. You must first take the entire perspective that is used in analyzing biology – dialectics, or inverse kinematics – and translate the defined problem into forward kinematics, where you can make the “surgery”, meaning cutting out the problems, for solving later, or giving them away to chance, into a selected dialectic medium.

    There is no active scientific system of philosophy that regulates all the perspective from one view to the other. The current state of philosophy is below a single absolute view, let alone translating at will from the metaphysical to the dialectical.

    In the first post on my blog I write about the possibilities of level 4.2 of evolution, which includes the tools that are needed in the philosophical world (actually at level 2), if philosophy is ever to help science and artificial intelligence research.

    In the 15th post I make a schema of the exact relation between the forward kinematics definition systems, aka metaphysics, and the inverse kinematics definition systems, aka dialectics.

    -a brain in a jar

    In my first post I differentiate between 3 types of information, which correspond to 3 types of consciousness. A brain in a jar would have access only to the second type, mathematical. Large masses of information of the 3rd type, declarative memory, are needed to construct a kinetic consciousness, which is the type that binds all 3 types of information into one homogeneous consciousness that has access to all the resources a man has in society. The kinetic consciousness slowly evolves into pure consciousness, which has only type 1 information and engulfs all the rest of the brain’s processes.

    With the semantic interpretation models that are available, everybody reaches a circumstantial personal maximum of evolution in the frame of kinetic consciousness purified to a certain degree. But a silicon brain in a jar would never have the huge type 3 information needed. It has to play sports, swim, feel the wind, the earth, the atmospheric pressure, etc. for a number of years before it can develop huge, huge, huge dialectic, inverse kinematics patterns, and only afterwards start to make complicated processes that involve switching between kinematics logics.

    Humans themselves are still not mentally evolved sufficiently to understand their brains and reproduce them.

    I’m slowly dying like a brain in a jar… reach for me.. www[dot]scatteredpiecesof1dying.blogspot[dot]com

  7. Blue Devil Knight says:

    Alex: I’d like to subscribe to your newsletter.

  8. Peter says:

    Thanks, BDK: fair points.

    I’ll think about your suggestions, Mikhail: in the meantime I don’t mind people using comments on a post to make more general remarks: or you can email me – peter+’at’+you know where.

    Additions to my list of definitions of consciousness are very welcome, but I need a source to mention.

    Good luck, Alex!

  9. Addofio says:

    Based not on biology, but on cognitive science, I think you’re right that the development of consciousness as we know it depends as much on having a body as on having a brain. For one thing, I’m not sure a brain–a complex brain, in any case–would evolve in the absence of a body. What would the survival value be? How would it come into being in the first place? And not just any body–but a body capable of acting on the world. There may be a reason plants have no brains.

    With regard to symbolization–what you said put me in mind of Jerome Bruner’s work Beyond the Information Given. He believed that as we develop from infancy, the first kind of abstraction we develop is kinesthetic, based on actions. He gives an example of a small boy representing the concept of a shovel (I think it was–it may have been the hole) by going through the motions of digging a hole with a shovel. As we grow up, action continues to be a very powerful mode of understanding things. In fact, it occurs to me that the fact that people are trying to make robots that can do what biological systems do, in part to better understand how biological systems work–including some of the mysteries of consciousness–is itself an example of the principle of understanding-through-doing.

    I’d add one other necessary ingredient for understanding human consciousness–and that’s the fact that we are a social species. Not only would a human brain not develop normally in a vat, it does not develop normally in the absence of being raised by and among a group of humans, as is demonstrated in the literature on feral children. I find octopi fascinating because they have developed a fairly impressive level of intelligence apparently as a result of being predators and being able to manipulate things with their tentacles (my own notion, that.) But they are not a social species, and therefore there are all kinds of fascinating questions they pose for thinking about consciousness, starting with–are they conscious? I tend to assume “yes”–I think it’s the most parsimonious assumption–but how could we know? Or even conceivably generate any evidence relevant to the question? And if they are–how is their consciousness like, and unlike, ours? For instance–do they have any kind of abstract symbolization, and if so–why, and how does it develop? And I’m sure you could ask even better questions.

  10. Peter says:

    Yes indeed. The Jerome Bruner point reminds me, in turn, of an explanation Vygotsky gives somewhere about learning to point. A baby wants something it can’t reach, but stretches its arm out anyway. Mother, following the line of the arm, sees what Baby wants and passes it over. Baby learns that this is a new way of getting things, and before long has refined the trick into genuine self-conscious pointing.
