I ain’t got no body

Brain in a jar. You probably read about the recent experiment which apparently simulated enough neurons for half a mouse brain running for the equivalent of one second of mouse time – though that required ten seconds of real time on a massively powerful supercomputer. Information is scarce: we don’t know how detailed the simulation was or how well the simulated neurons modelled the behaviour of real mouse neurons (it sounds as though the neurons operated in a void, rather than in a simulated version of the complex biological environment which real neurons inhabit, and which plays a significant part in determining their behaviour). No attempt was made to model any of the structure of a mouse brain, though it is said that some signs of patterns of firing were detected, suggesting that some nascent self-organisation might have been under way – on an optimistic interpretation.

I don’t know how this research relates to the Blue Brain project, which has just about reached what the researchers consider a reasonable neuron-by-neuron electronic simulation of one of the cortical columns in the brain of a juvenile rat. The people there emphasise that the project aims to explore the operation of neuronal structures, not to produce a working brain, still less an artificial consciousness, in a computer.

It is not altogether clear, of course, how well an isolated brain – even a whole one – would work (another unanswered question about the simulation is whether it was fed with simulated inputs or left to its own devices). A brain in a jar, dissected out of its body but still functioning, is a favourite thought-experiment of some philosophers; but others denounce the idea that consciousness is to be attributed to the brain, rather than the whole person, as the ‘mereological fallacy’. A new angle on this point of view has been provided by Rolf Pfeifer and Josh Bongard in How the body shapes the way we think. Their title is actually slightly misleading, since they are not concerned primarily with human beings, but seek instead to provide a set of design principles for better robots, drawing on the history of robotics and on their own theoretical insights. They’re not out to solve the mystery of consciousness, either, but their ideas are of some interest in that connection.

In essence, Pfeifer and Bongard suggest that many early robots relied too much on computation and not enough on bodies and limbs with the right kind of properties. They make several related points. One is the simple observation that sometimes putting springs in a robot’s legs works better than attempting to calculate the exact trajectory its feet need to follow to achieve perfect locomotion. In fact, a great deal can be accomplished merely by designing the right kind of body for your robot. Pfeifer and Bongard cite the ‘passive dynamic walkers’ which achieve a human-style bipedal gait without any sensors or computation at all (they even imply, bizarrely, that the three patron saints of Zurich might have worked on a similar principle: apparently legend relates that once their heads were cut off, the holy three walked off to the site of the Grossmünster church). Similarly, the canine robot Puppy produces a dog-like walk from a very simple input motion, so long as its feet are allowed to slip a bit. Human babies are constructed in such a way that even random neural activity is relatively likely to make their hands sweep the area in front of them, grasp an object, and bring it up towards eyes and mouth: so that this exploratory behaviour is inherently likely to arise in babies even if it is not neurally pre-programmed.

Another point is that interesting (and intelligent) behaviour emerges in response to the environment. Simple block-pushing robots, if designed a certain way, will automatically push the blocks in their environment together into groups. This behaviour, which is really just a function of the placement of sensors and the very simple program operating the robots, looks like something achieved by relatively complex computation, but really just emerges in the right environment.

Things are looking bleak for the brain in a jar, but there is a much bolder hypothesis to come. Pfeifer and Bongard note that a robot such as Puppy may start moving in an irregular, scrambling way, but gradually falls into a regular trot: in fact, for different speeds there may be several different stable patterns: a walk, a run, a trot, a gallop. These represent attractor states of the system, and it has been shown that neural networks are capable of recognising these states. Pfeifer and Bongard suggest that the recognition of attractor states like this represents the earliest emergence of symbolic structures; that cognitive symbol manipulation arises out of recognising what your own body is doing. Here, they suggest, lies the solution to the symbol grounding problem, and to the problem of intentionality itself.
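The notion of an attractor state can be made concrete with a toy dynamical system (this is my own illustrative sketch, not anything from Pfeifer and Bongard’s book): gradient descent on a double-well potential has exactly two stable states, and every nearby trajectory, wherever it starts, settles into one of them – rather as Puppy’s initial scramble settles into a trot.

```python
def settle(x0, steps=200, lr=0.1):
    # Gradient descent on the double-well potential V(x) = (x**2 - 1)**2.
    # Its two minima, at x = +1 and x = -1, act as attractor states:
    # any starting point (other than the unstable point x = 0) is
    # drawn into one of them and stays there.
    x = x0
    for _ in range(steps):
        x -= lr * 4 * x * (x * x - 1)  # step along -dV/dx
    return x
```

Start at 0.5 and the system ends up at +1; start at −0.5 and it ends up at −1. A classifier that only had to say *which* basin the system is in would be doing something much cruder than tracking the trajectory itself – which is roughly the role Pfeifer and Bongard assign to the networks that recognise gait patterns.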

If that’s true, then a brain in a jar would have serious problems: moreover, simulating a brain in isolation is very likely to be a complete waste of time because without a suitable body and an environment to move around in, symbolic cognition will just never get going.

Is it true? The authors actually offer relatively little in the way of positive evidence or argument: they tell a plausible story about how cognition might arise, but don’t provide any strong reasons to think that it is the only possible story. I’m not altogether sure that their story goes as far as they think, either. Is the ability to respond to one’s body falling into certain attractor states a sign of symbol manipulation, any more than being able to respond to a flash of light or a blow on the knee? I suspect that symbols require a higher level of abstraction, and the real crux of the story is in how that is achieved – something which looks likely, prima facie, to be internal to the brain.

But I think if I were running a large-scale brain simulation project, these ideas might cause me some concern.

Loopy

With I am a Strange Loop, Douglas Hofstadter returns – loops back? – to some of the concerns he addressed in his hugely successful book Gödel, Escher, Bach, which engaged and inspired readers around the world. Despite the popularity of the earlier book, Hofstadter feels the essential message was sometimes lost; the new book started out as a distilled restatement of that message, shorn of playful digressions, dialogues, and other distractions. It didn’t quite turn out that way, so the new book is much more than a synopsis of the old. However, the focus remains on the mystery of the self.

Is it a mystery? In a way I wondered why Hofstadter thought there was any problem about it. Talking about myself is just a handy way of picking out a particular human animal, isn’t it? In fact it doesn’t have to be human. We might say of a dog that “He wants to get in the basket himself”; for that matter we might say of a teacup “It just fell off the shelf by itself”. Why should talk of I, myself, me, evoke any more philosophical mystery than talk of she, herself, her?

I can think of three bona fide mysteries attached to the conscious self: agency (or free will if you like); phenomenal experience (or qualia); and intentionality (or meaning). Hofstadter is unimpressed by any of these. Qualia and free will get a quick debunking in a late chapter: Hofstadter shares his old friend Dennett’s straight scepticism about qualia, and as for free will, what would it even mean? Intentionality gets a more respectful, but more glancing treatment. Hofstadter proposes the analogy of the careenium, which is to be imagined as a kind of vast, frictionless billiard table on which hordes of tiny magnetic balls, known as simms, roll around. Occasionally the edge of the careenium may be hit by objects in the exterior world, imparting new impulses to the simms and possibly causing them to agglomerate into big rolling masses, or simmballs (the attentive reader will notice that a strong focus on the core message has by no means excluded a fondness for artful puns and wordplay). Evidently impulses from the outside world spark off neural symbols, whose interaction gives rise to our thoughts.

It would not be fair to criticise the sketchiness of this account, because it isn’t what Hofstadter is on about: it isn’t even the main point of the careenium metaphor. He acknowledges, too, that some will think talk of symbols in the brain leads to the assumption of an homunculus to read them, and to other difficulties, and he briefly indicates a response. The point really is that for Hofstadter all this is largely beside the point.

So what is the real point, and why does Hofstadter find the self worthy of attention? For him it is all about greatness of soul; friendship and the sharing, the sympathetic entering into, of other consciousnesses. This is where the loops come in.

Hofstadter quotes several examples of self-referential loops, re-creating, for example, experiments with video feedback which he first carried out many years ago. More substantially, he gives an account of how Kurt Gödel managed to get self-reference back into logical systems like the one set out in Russell and Whitehead’s Principia Mathematica. PM, as Hofstadter calls it, was a symbolic language like a logical algebra, which the authors used to show how arithmetic could be built up out of a few simple logical operations. One of its distinguishing features was that it included special rules which were meant to exclude self-reference, because of the paradoxes which readily arise from sentences or formulae that talk about themselves (as in ‘This sentence is false.’). By arithmetising the notation and applying some clever manoeuvres, Gödel was able to reintroduce self-reference into the world of PM and similar systems (a disaster so far as the authors’ aspirations were concerned) and incidentally went on to prove the existence in any such system of true but unprovable assertions.
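The arithmetisation step can be illustrated with the standard prime-exponent trick (a sketch of the general idea, not Gödel’s exact construction): assign each symbol of the language a number, then encode a whole formula as a single integer by raising successive primes to those numbers. Because prime factorisation is unique, the original formula can always be recovered – so statements about ordinary numbers can double as statements about formulae, which is what lets a formula end up talking about itself.

```python
def primes(n):
    # First n primes by trial division (fine for short formulas).
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def godel_number(codes):
    # Encode a sequence of symbol codes as one integer: the i-th
    # prime raised to the i-th code, all multiplied together.
    n = 1
    for p, c in zip(primes(len(codes)), codes):
        n *= p ** c
    return n

def decode(n, length):
    # Recover the symbol codes by counting prime factors of n.
    codes = []
    for p in primes(length):
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        codes.append(c)
    return codes
```

So a formula whose symbols carry the (hypothetical) codes 3, 1, 4 becomes the single number 2³ × 3¹ × 5⁴ = 15000, and factorising 15000 gives the formula back intact.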

This is one of the most interesting sections of the book, though not the easiest to read. It’s certainly enlivened, however, by the grudge Hofstadter seems to have against Russell: I found myself wondering whether Grammaw Hofstadter could have been the recipient of unwelcome attentions from the aristocratic logician at some stage. I’ve read many accounts which suggest that Bertrand Russell might have been, say, a touch self-centred: but it’s a novelty to read the suggestion that he might have been a bit dumb in certain respects. In this account, Gödel is the young Turk who explodes the enlivening force of self-reference within Russell’s gloomy castle: no mention here of how Russell himself had previously done something similar to the structure erected by Frege, so that Russell also has a claim to be considered a champion of paradox.

People are clearly examples of self-referential systems, but is self-reference an essential part of consciousness or merely a natural concomitant? It is plausible that self-reference might help explain why our thoughts, for example, seem to pop out of nowhere: when we try to locate the source, we get trapped into looking down a regress like the infinite corridor which appears in the video feedback. Hofstadter’s loops, moreover, are no ordinary loops – they are strange. A strange loop violates some hierarchical order, perhaps with containers being contained, or steps from a higher level leading up to lower ones. It is our symbolic ability which gives us the ability to create strange loops: to symbolise our own symbolic system, and so on. However, we are unable to perceive the lower-level neural operations which constitute our thoughts, and we therefore suffer the impression of mysteriously efficacious, self-generating thoughts and decisions, which we cannot reconcile with the sober scientific account of the physical events which really carry the force of causality.

Hofstadter spends some time on this issue of different levels of interpretation, seeking to persuade us that the existence of complete physical or neural stories does not stop accounts of causality in terms of higher-level entities (thoughts, intentions, and so on) from being salient and reasonable. He argues, intriguingly, that there is an analogy here with what Gödel did: by reinterpreting PM at a higher level, he was able to draw conclusions and in a sense set limits to what could be done within PM. In the same way we can legitimately say that our intentions were pushing the neurons around, not just that the neurons were pushing the intentions. This idea of downward causality across levels of interpretation seems metaphysically interesting, though I think it is quite redundant: the relation between entities at different levels of explanation is identity, not causation (though I suppose you can see identity as the basic case of causation if you wish).

Now things get more difficult. Hofstadter argues that bits of our own loops get echoed in those of other people, and that therefore we exist in them as well as in our own brains. If two video cameras and screens are looping and enter each other’s view, then the loop of camera A will be running, in miniature, in that of camera B, and vice versa. Our engagement with other people thus helps to create and sustain us, and in fact it means that death itself is not as abrupt as it would otherwise be, the afterglow of our own strange loops continuing to spin in the minds of others and thereby sustaining our continued existence to some limited extent. This theorising is attached to an account of Hofstadter’s reaction to the sudden, and tragically early, death of his wife.

Hofstadter does not believe that his views about death have been fundamentally influenced by this awful experience: he says that his conclusions were largely drawn before his wife’s sudden illness. All the same, the context makes it harder to say what needs saying: that the reasoning here is, in fact, wholly unconvincing. It’s just not true that the image of a feedback loop in another feedback loop remains a viable feedback loop in itself.

Hofstadter offers some arguments designed to shake our belief in the simple equation of body with person (he is happy to use the word ‘soul’, though without its usual connotations), including the one – rather tired in my view – of the magic scanners which can either relocate or duplicate us perfectly. Where a person is duplicated like this, he says, it makes no sense to insist that one of the two bodies must be the real person. “…to believe in such an indivisible, indissoluble “I” is to believe in nonphysical dualism”. Not necessarily: it might instead be to assert the most brutal kind of physical monism: we are physical objects, and our existence depends utterly on our physical continuity.

I’m afraid I therefore end up personally having a good deal of sympathy with many of the incidental things Hofstadter says while remaining unconvinced by the points he regards as most important. Whether or not you accept his case, though, Hofstadter has a remarkable gift for engaging and lively exposition, more of which will always be welcome.