A bit of an update in Seed magazine on the Blue Brain project. This is the project that set out to simulate the brain by actually reproducing it in full biological detail down to the behaviour of individual neurons and beyond: with some success, it seems.
The idea of actually simulating a real brain in full has always seemed fantastically ambitious, of course, and in fact the immediate aim was more modest: to simulate one of the columnar structures in the cortex. This is still an undertaking of mind-boggling complexity: 10,000 neurons and 30 million synaptic connections, involving 30 different kinds of ion channel. In fact it seems the ion channels were one of the problem areas; in order to get good enough information about them, the project apparently had to set up its own robotic research programme. I hope the findings of this particular part of the project are being published in a peer-reviewed journal somewhere.
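To give a flavour of what simulating neurons “down to the ion channels” involves, here is a minimal single-neuron sketch in Python: just the textbook Hodgkin-Huxley equations with three conductances, integrated by forward Euler. Blue Brain’s real models (reportedly built with the NEURON simulator, using detailed cell morphologies and dozens of channel types) are vastly more elaborate; this is only meant to show the kind of equation being solved, ten thousand times over, at every time step.

```python
import math

# Classic Hodgkin-Huxley parameters (squid axon; units: mV, ms, mS/cm^2, uA/cm^2)
C_M = 1.0                            # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3    # peak conductances of the three channel types
E_NA, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials

# Voltage-dependent opening/closing rates for the gating variables m, h, n
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_inject=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of one neuron; returns the voltage trace."""
    v = -65.0                   # resting potential, mV
    m, h, n = 0.05, 0.6, 0.32   # gating variables near their resting values
    trace = []
    for step in range(int(t_max / dt)):
        i_ext = i_inject if step * dt > 5.0 else 0.0  # current switched on at 5 ms
        # Ionic currents through the three channel types
        i_na = G_NA * m ** 3 * h * (v - E_NA)
        i_k = G_K * n ** 4 * (v - E_K)
        i_l = G_L * (v - E_L)
        # Update the channel gates, then the membrane voltage
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        trace.append(v)
    return trace

if __name__ == "__main__":
    trace = simulate()
    spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
    print(f"peak voltage: {max(trace):.1f} mV; spikes in 50 ms: {spikes}")
```

Multiply that by 10,000 cells with realistic branching geometry and 30 million synapses and you can see why it took a supercomputer.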
However, the remarkable thing is that it worked: eventually the simulated column was created and proved to behave in the same way as a real one. So is the way open for a full brain simulation? Not quite. Even setting aside the many structural challenges which surely remain to be unravelled (don’t they? The brain isn’t simply an agglomeration of neocortical columns), Henry Markram, the project Director, estimates that an entire brain would require the processing of 500 petabytes of data, way beyond current feasibility. Markram believes that within ten years, the inexorable increase in computing power will make this a serious possibility. Maybe: it doesn’t pay to bet against Moore’s Law; but I can’t help noticing that there has been a big historical inflation in the estimated need, too. Markram now wants 500 petabytes, and a single petabyte is 10^15 bytes; yet in 1950 Turing thought that 10^15 bits represented the highest likely capacity of the brain, with about 10^9 bits enough for a machine which could pass the Turing Test. OK, perhaps not really a fair comparison, since Turing had nothing like Blue Brain in mind.
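For concreteness, the arithmetic of that comparison (taking a petabyte as 10^15 bytes, i.e. 8 × 10^15 bits; the other figures are the ones quoted above) can be run in a couple of lines:

```python
# Rough comparison of Markram's 500 PB with Turing's 1950 estimates
petabyte_bits = 8 * 10**15           # one petabyte, in bits
markram_bits = 500 * petabyte_bits   # 500 PB = 4e18 bits
turing_brain = 10**15                # Turing's upper estimate of brain capacity, bits
turing_test = 10**9                  # his figure for a Turing-Test-passing machine

print(markram_bits / turing_brain)   # 4000.0        -> ~4,000x his brain estimate
print(markram_bits / turing_test)    # 4000000000.0  -> ~4 billion x his Turing-Test figure
```

So the requirement has grown to roughly four thousand times Turing’s upper bound for the brain itself, though of course 500 petabytes of data to be processed is not quite the same thing as storage capacity.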
One criticism of the project asks how it judges its own success; or rather, it suggests that the very fact that it judges its own success is a problem. If we had a full brain which could operate a humanoid robot and talk to us, there would be no doubt about the success of the project; but how do we judge whether a simulated neuronal column is actually working? The project team say that their conclusions are based on scrupulous comparisons with real biological brains, and no doubt that’s right; but there’s still a danger that the simulation merely confirms the expectations fed into it. They came up with an idea of how a column works; they built something that worked like that; and behold, it works the way they think a column works.
There is also undeniably something a bit strange about the project. Before Blue Brain was ever thought of, proponents of AI would sometimes use the idea of a total simulation as a kind of thought-experiment to establish the merely neurological nature of the mind. OK, there might be all these mechanisms we didn’t understand, and emergent phenomena, and all the rest, but at the end of the day, what if we just simulated everything? Surely then you’d have to admit, we would have made an artificial mind – and what was to stop us, except practicality? It was an unexpected development back in 2005 when someone actually set about making this last-ditch argument a reality. It is unique; I can’t think of another case where someone set out to reproduce a biological process by building a fully detailed simulation, without having any theory of how the thing worked in principle.
This raises some peculiar possibilities. We might put together the full Blue Brain; it might be demonstrably performing like a human brain, controlling a robot which walked around and discussed philosophy with us, yet we still wouldn’t know how it did it. Or, worse perhaps, we might put it all together, see everything working perfectly at a neuronal level, and yet have our attached robot standing slack-jawed or rolling around in a fit, without our being able to tell why.
It may seem unfair to describe Markram and his colleagues as having no theory, but some of his remarks in the article suggest he may be one of those scientists who don’t really get the Hard Problem at all.
…It’s the transformation of those cells into experience that’s so hard. Still, Markram insists that it’s not impossible. The first step, he says, will be to decipher the connection between the sensations entering the robotic rat and the flickering voltages of its brain cells. Once that problem is solved—and that’s just a matter of massive correlation—the supercomputer should be able to reverse the process. It should be able to take its map of the cortex and generate a movie of experience, a first person view of reality rooted in the details of the brain…
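Taken at face value, the “massive correlation” step Markram describes is the sort of thing neural decoding studies already attempt: fit a statistical mapping between stimulus and recorded activity, then run it backwards to reconstruct the stimulus. Here is a deliberately toy sketch of that idea (synthetic data and a plain least-squares decoder; it has nothing to do with Blue Brain’s actual methods):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: a one-dimensional "stimulus" linearly drives 50 noisy "voltages".
n_samples, n_cells = 2000, 50
stimulus = rng.normal(size=n_samples)    # what the (robotic) rat sees
tuning = rng.normal(size=n_cells)        # each cell's sensitivity to the stimulus
voltages = np.outer(stimulus, tuning) + 0.5 * rng.normal(size=(n_samples, n_cells))

# "Massive correlation": least-squares fit of the stimulus from the voltages...
decoder, *_ = np.linalg.lstsq(voltages, stimulus, rcond=None)

# ...then "reverse the process": reconstruct the stimulus from activity alone.
reconstruction = voltages @ decoder
r = np.corrcoef(stimulus, reconstruction)[0, 1]
print(f"correlation between stimulus and reconstruction: {r:.3f}")
```

Whether a reconstruction of this kind, however high the correlation, amounts to a first-person “movie of experience” is of course exactly the point at issue.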
It could be that Markram merely denies the existence of qualia, a perfectly respectable point of view; but it looks as if he hasn’t really grasped what they are, and that they can’t be captured in any kind of movie. Perhaps this outlook is a natural or even a necessary quality of someone running this kind of project. I suppose we’ll have to wait and see what happens when he gets his 500 petabytes of capacity.