Robots.net recently featured the Whole Brain Emulation Roadmap (pdf) produced by the Future of Humanity Institute at Oxford University. The Future of Humanity Institute has a transhumanist tinge which I find slightly off-putting, and it does seem to include fiction among its inspirations, but the Roadmap is a thorough and serious piece of work, setting out in summary at least the issues that would need to be addressed in building a computer simulation of an entire human brain. Curiously, it does not include any explicit consideration of the Blue Brain project, even in an appendix on past work in the area, although three papers by Markram, including one describing the project, are cited.

One interesting question considered is: how low do you go? How much detail does a simulation need to have? Is it good enough to model brain modules (whatever they might be), neuronal groups of one kind or another, neurons themselves, neurotransmitters, quantum interactions in microtubules? The roadmap introduces the useful idea of scale separation; there might be one or more levels where there is a cut-off, and a simulation in terms of higher-level entities does not need to be analysed any further. Your car depends on interactions at a molecular level, but in order to understand and simulate it we don’t need to go below the level of pistons, cylinders, etc. Are there any cut-offs of this kind in the brain? The roadmap is not meant to offer answers, but I think after reading it one is inclined to think that there is probably a cut-off somewhere below neuronal level; you probably need to know about different kinds of neurotransmitters, but probably don’t need to track individual molecules. Something like this seems to have been the level the Blue Brain project settled on.

The roadmap merely mentions some of the philosophical issues. It clearly has in mind the uploading of an individual consciousness into a computer, or the enhancement or extension of a biological brain by adding silicon chips, so an issue of some importance is whether personal identity could be preserved across this kind of change. If we made a computer copy of Stephen Hawking’s brain at the moment of his death, would that be Stephen Hawking?

The usual problem in discussions of this issue is that it is easy to imagine two parallel scenarios; one in which Hawking dies at the moment of transition (perhaps the destruction of his brain is part of the process), and one in which the exact same simulation is created while he continues his normal life. In the first case, we might be inclined to think that the simulation was a continuation, in the latter case it’s more difficult; yet the simulation in both cases is the same. My inclination is to think that the assertion of continuing identity in the first case is loose; we may choose to call it Hawking, but even if we do, we have to accept that it’s Hawking put through a radical alteration.

Of course, even if the simulation hasn’t got Hawking’s personal identity, having a simulation of his brain (or even one which was only 80% faithful) would be a fairly awesome thing.

The roadmap provides a useful list of assumptions. One of these is:

Computability: brain activity is Turing-computable, or if it is uncomputable, the uncomputable aspects have no functionally relevant effects on actual behaviour.

I’ve come to doubt that this is probable. I cannot present a rigorous case, but in sloppy impressionistic terms the problem is as follows. Non-computable problems like the halting problem or the tiling problem seem intuitively to involve processes which, when tackled computationally, go on forever without resolution. Human thought is able to deal with these issues by being able to ‘see where things are going’ without pursuing the process to the end.
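As a toy illustration of what I mean (my own invented example, nothing from the roadmap), consider a loop whose non-halting a human can settle with a one-line parity argument, while blind step-by-step simulation never resolves the question:

```python
# Toy illustration (not from the roadmap): brute-force simulation
# versus 'seeing where things are going'.

def run_mystery(n, max_steps=1_000_000):
    """Repeatedly add 2 to n; report whether we ever hit zero."""
    for _ in range(max_steps):
        if n == 0:
            return "halted"
        n += 2
    return "undecided"  # the simulator has to give up at some finite bound

# Step-by-step simulation from n = 1 never settles the question:
print(run_mystery(1))   # -> "undecided"

# A human sees the answer at once: adding 2 preserves parity, so an odd
# starting value can never reach zero. This settles one family of
# non-halting cases without running the process at all.
def provably_never_halts(n):
    return n % 2 == 1
```

The point of the sketch is only that the proof works by an invariant of the process, not by running it; whether anything in human thought genuinely outruns computation in this way is of course exactly what is in dispute.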

Now it seems to me that the process of recognising meanings is very likely a matter of ‘seeing where things are going’ in much the same way. Computers don’t deal with meaning at all, although there are cunning ploys to get round this in the various areas where it arises. The problem may well be that meanings are indefinitely ambiguous; there are always some more possible readings to be eliminated, and this might be why meaning is so intractable to computation.

Of course, apart from the hand-waving vagueness of that line of thought, it leaves me with the problem of explaining how the problem would manifest itself in the construction of a whole brain simulation; there would presumably have to be some properties of a biological brain which could never be accurately captured by a computational simulation. There are no doubt some fine details of the brain which could never be captured with perfect accuracy, but given the concept of scale separation, it’s hard to see how that alone would be a fatal problem.

When a whole brain simulation is actually attempted, the answer will presumably emerge; alas, according to the estimates in the roadmap, I may not live to see it.


  1. Derek James says:

    Human thought is able to deal with these issues by being able to ‘see where things are going’ without pursuing the process to the end.

    You’re just talking about estimating look-ahead or prediction, which is already a feature of some AI systems. Even if it weren’t, why would this seem like something that would be difficult or impossible to instantiate in a computer?

    What is a concrete example of humans solving a type of halting problem that could not be practically simulated on a computer?

  2. steevithak says:

    Regarding the “how low do you go” question, I seem to recall Dennett proposing in “Consciousness Explained” that consciousness was “implemented” in a virtual von Neumann machine that runs on top of the parallel neural system provided in our brain. If he’s right and if creating a conscious machine is the goal, perhaps we don’t need to be too concerned about exactly duplicating low level brain structures so much as whatever it is that runs in that higher level serial virtual machine.

    On the other hand, trying to exactly duplicate the brain at the lowest level possible may be the shortest route to understanding how consciousness arises from the brain in the first place. We often don’t seem to be making much progress by other methods. I posted an article on neural network hardware last year that described a Stanford chip that simulates a 256×256 array of biological neurons down to the chemical level including protein pores and ion channels.

  3. Peter says:

    Derek – no, I accept that various kinds of prediction can be done computationally. I’m thinking specifically of non-computable problems.

    Any problem which doesn’t, in fact, halt and can be seen by human reasoning to go on forever is an example. But expounding this properly would take too much space for a comment (and to be honest, too much thinking time for me – I’m no expert) – would you allow me to pick this up in another post in a week or so?

    As I say, I’m not claiming here that I have a way of showing that the interpretation of meaning is formally equivalent to tiling the plane, or anything like that: these are just intuitions which you may not share.

    steevithak – Dennett does say that, although I think it’s an uncharacteristically muddled piece of thinking. If I can digress a bit, here’s why. Dennett seems to use ‘von Neumann’ at that point simply to mean ‘serial’, which is odd – whether your computer follows the von Neumann or the Harvard architecture, and whether it’s parallel or serial, are really different issues. But assuming we’re talking about serial processing, why on earth would anyone implement a virtual serial computer on a parallel one? Parallel and serial computing produce the same results, just (sometimes) at different speeds. My guess is that Dennett has confused the process with the output and is thinking that to output a single-threaded, serial ‘stream of consciousness’ requires a serial processor. But that’s not true at all.

    Anyway, getting back to the issue, the point you make is quite right. If we’re only going for consciousness, we might be able to do it with structures that are nothing like the human brain. In the case of the road map, though, whole brain emulation is explicitly the goal, so the blueprint for our simulation has to match the blueprint for the brain at some level, though the individual components might be realised in entirely different ways.

  4. Lloyd Rice says:

    It seems to me that the essential distinction to be made in how a problem is tackled by one or the other kind of computer is a matter of how the problem gets structured. We tend to think of programs as being organized like you might state an algorithm, how to do long division or find prime numbers, for example. But the mind seems to approach things more from the outside, like constructing a general description of what’s going on before tackling the details. I believe it is this which makes the halting problem, NP-hardness, etc., irrelevant to the way humans usually look at the world. And I see no reason that a computer could not be organized to work the same way, tho in some ways it’s a much harder way to solve some problems. It is at this sort of level that I believe the essential differences lie between humans and computers (as currently conceived), not at the deep levels of molecules, microtubules, even neurons or logic gates.

  5. Derek James says:

    Any problem which doesn’t, in fact, halt and can be seen by human reasoning to go on forever is an example. But expounding this properly would take too much space for a comment (and to be honest, too much thinking time for me – I’m no expert) – would you allow me to pick this up in another post in a week or so?

    Sure thing…I’ll be looking forward to it.

  6. Mike says:

    This is an excellent discussion of the computer simulation of the brain. I have my doubts that we will see a computer brain that will actually be conscious and function like a real brain. We really don’t know how much of our world we need to emulate to get a good simulation (chemistry, physics, etc.). I doubt it will be possible to drastically simplify things and still get a working model, as Ray Kurzweil seems to think we can. Here’s what Kurzweil says in his book The Singularity Is Near:
    “Modeling human-brain functionality on a nonlinearity-by-nonlinearity and synapse-by-synapse basis is generally not necessary. Preserving the exact shape of every dendrite and the precise squiggle of every interneuronal connection is generally not necessary. We can understand the principles of operation of extensive regions of the brain by examining their dynamics at the appropriate level of detail.”
    Check out my neurotechnology blog too. It might have some stuff that would be of interest.

  7. Christophe Menant says:

    All these brain emulation activities look pretty powerful but I’m still under the impression that they are missing a key point to be productive.
    Why look at today’s brains without considering how they came up through evolution? The whole is more than the sum of the parts.
    What would be our understanding of matter, atoms and particles if the scientific community had decided not to consider the evolution of the universe? Same for biology vs the evolution of life.
    By the same token, the reverse engineering of a microprocessor brings nothing if you don’t know its manufacturing process sufficiently. Same for a car, a building, a wine, anything…
    So it is really surprising that all the efforts invested in the analysis of human mind and consciousness tend to forget the evolutionary components.
    Is this an indirect consequence of the lack of interest that the founding fathers of phenomenology and analytic philosophy had for evolution? Could be… (See Cunningham, S. (1996). Philosophy and the Darwinian Legacy. University of Rochester Press.)
    But bad stories have exceptions (http://www.consc.netlook/mindpapers/8.8a)

  8. b b prakash says:

    Folks: At some point all the comments tend to look like we are getting nowhere, so here’s a take on Descartes to lighten the mood: "I think, therefore I think I am."

  9. Michael Baggot says:

    There is a rather nasty, self-deluding trick when it comes to understanding how the brain computes. This trick leads us to believe that brains are strictly deductive engines like ordinary computers. But brains are in fact both inductive, i.e., rule generating, and deductive, i.e., rule applying or sorting engines. The trick lies in the fact that in the moment of inductive insight when the rules are discovered the problem’s inductive essence is lost forever and it will now be known as a matter of ordinary deduction. Thus – if we are not careful – it will always be very tempting to argue that the machine that understands now could always have understood if only given enough time.
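    The induction/deduction distinction in the comment above can be sketched in code (a hypothetical toy of my own, not an example from the comment): induction is the search for a rule that fits observed cases; once the rule is found, applying it is ordinary deduction, and nothing in the finished rule records that it was ever hard to discover.

    ```python
    # Hypothetical toy: induction = searching a hypothesis space for a rule
    # that fits the observations; deduction = applying the found rule.

    observations = [(1, 2), (2, 4), (3, 6)]          # (input, output) pairs
    hypotheses = {
        "double": lambda x: 2 * x,
        "square": lambda x: x * x,
        "add_one": lambda x: x + 1,
    }

    # Inductive step: discover which rule explains all the data.
    rule_name, rule = next(
        (name, f) for name, f in hypotheses.items()
        if all(f(x) == y for x, y in observations)
    )

    # Deductive step: applying the rule is now trivial sorting --
    # the call below leaves no trace of the search that produced it.
    print(rule_name, rule(10))   # -> double 20
    ```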
