Bridging the Brain

How can we find out how the brain works? This was one of five questions put to speakers at the Cognitive Computational Neuroscience conference, and the answers, posted on the conference blog, are interesting. There seems to be a generally shared perspective that what we need are bridges between levels of interpretation, though there are different takes on what those are likely to be and how we get them. There’s less agreement about the importance of recent advances in machine learning and how we respond to them.

Rebecca Saxe says the biggest challenge is finding the bridge between different levels of interpretation – connecting neuronal activity on the one hand with thoughts and behaviour on the other. She thinks real progress will come when both levels have been described mathematically. That seems a reasonable aspiration for neurons, though the maths would surely be complex; but the mind boggles rather at the idea of expressing thoughts mathematically. It has been suggested in the past that formal logic was going to be at least an important part of this, but that hasn’t really gone well in the AI field and to me it seems quite possible that the unmanageable ambiguity of meanings puts them beyond mathematical analysis (although it could be that in my naivety I’m under-rating the subtlety of advanced mathematical techniques).

Odelia Schwartz looks to computational frameworks to bring together the different levels; I suppose the idea is that such frameworks might themselves have multiple interpretations, one resembling neural activity while another is on a behavioural level. She is optimistic that advances in machine learning open the way to dealing with natural environments: that is a bit optimistic in my view but perhaps not unreasonably so.

Nicole Rust advocates ‘thoughtful descriptions of the computations that the brain solves’. We got used, she rightly says, to the idea that the test of understanding was being able to build the machine. The answer to the problem of consciousness would not be a proof, but a robot. However, she points out, we’re now having to deal with the idea that we might build successful machines that we don’t understand.
Another way of bridging between those levels is proposed by Birte Forstmann: formal models that make simultaneous predictions about different modalities such as behavior and the brain. It sounds good, but how do we pull it off?

Alona Fyshe sees three levels – neuronal, macro, and behavioural – and wants to bring them together through experimental research, crucially including real world situations: you can learn something from studying subjects reading a sentence, but you’re not getting the full picture unless you look at real conversations. It’s a practical programme, but you have to accept that the correlations you observe might turn out complex and deceptive; or just incomprehensible.

Tom Griffiths has a slightly different set of levels, derived from David Marr: computational, algorithmic, and implementational. He feels the algorithmic level has been neglected; but I’d say it remains debatable whether the brain really has an algorithmic level. An algorithm implies a tractable level of complexity, whereas it could be that the brain’s ‘algorithms’ are so complex that all explanatory power drains away. Unlike a computer, the brain is under no obligation to be human-legible.

Yoshua Bengio hopes that there is, at any rate, a compact set of computational principles in play. He advocates a continuing conversation between those doing deep learning and other forms of research.
Wei Ji Ma expresses some doubt about the value of big data; he favours a diversity of old-fashioned small-scale, hypothesis-driven research; a search for evolutionarily meaningful principles. He’s a little concerned about the prevalence of research based on rodents; we’re really interested in the unique features of human cognition, and rats can’t tell us about those.

Michael Shadlen is another sceptic about big data and a friend of hypothesis-driven research, working back from behaviour; he’s less concerned with the brain as an abstract computational entity and more with its actual biological nature. People sometimes say that AI might achieve consciousness by non-biological means, just as we achieved flight without flapping wings; Shadlen, on that analogy, still wants to know how birds fly.

If this is a snapshot of the state of the field, I think it’s encouraging; the approaches briefly indicated here seem to me to show good insight and promising lines of attack. But it is possible to fear that we need something more radical. Perhaps we’ll only find out how the brain works when we ask the right questions, ones that realign our view in such a way that different levels of interpretation no longer seem to be the issue. Our current views are dominated by the concept of consciousness, but we know that in many ways that is a recent idea, primarily Western and perhaps even Anglo-Saxon. It might be that we need a paradigm shift; but alas I have no idea where that might come from.

Is the brain understandable?

Can we, one day, understand how the neurology of the brain leads to conscious minds, or will that remain impossible?

Round here we mostly discuss the mind from a top-down, philosophical perspective; but there is another way, which is to begin by understanding the nuts and bolts and then gradually work up to more complex processes. This Scientific American piece gives a quick view of how research at the neuronal level is coming along (quite well, but with vastly more to do).

Is this ever going to tell us about consciousness, though? A point often quoted by pessimists is that we have had the complete ‘wiring diagram’ of the roundworm Caenorhabditis elegans for years (Caenorhabditis has only just over 300 neurons and they have all been mapped) but still cannot properly explain how it works. Apparently researchers have largely given up on this puzzle for now. Perhaps Caenorhabditis is just too simple; its nervous system might be quirky or use elegant but opaque tricks that make it particularly difficult to fathom. Instead researchers are using fruit fly larvae and other creatures with nervous systems that are simple enough to deal with, but large enough to suggest that they probably work in a generic way, one that is broadly standard for all nervous systems up to and including the human. With modern research techniques this kind of approach is yielding some actual progress.

How optimistic can we be, though? We can never understand the brain by knowing the simultaneous states of all its neurons, so the hope of eventual understanding rests on the neurology of the brain being legible at some level. We hope there will turn out to be functions that get repeated, that form building blocks of some intelligible structure; that we will be able to deduce rules or a kind of grammar which will let us see how things work on a slightly higher level of description.

This kind of structure is built into machines and programs; they are designed to be legible by human beings and lend themselves to reverse engineering. But the brain was not designed and is under no obligation to construct itself according to regular plans and principles. Our hope that it won’t turn out to be a permanently incomprehensible tangle rests on several possibilities.

First, the brain might just turn out to be legible. The computer metaphor encourages us to think that the brain must encode its information in regular ways (though the lack of anything strongly analogous to software is arguably a fly in the ointment). Perhaps we’ll just get lucky. When the structure of DNA was discovered, it really seemed as if we’d had a stroke of luck of this kind: what amounted to a long string of four repeated characters, one that given certain conditions could be read as coding for many different proteins. It looked like we had a really clear, legible system of very general significance. It still does to a degree, but my impression is that the glad confident morning is over, and now the more we learn about genetics the more complex and messy it gets. But even if we take it that genetics is a perfect example of legibility, there’s no particular reason to think that the connectome will be as tractable as the genome.
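To make the idea of legibility concrete, here is a minimal Python sketch using a scrap of the real standard genetic code (just a handful of the 64 codons, chosen to keep the illustration short):

```python
# A few real entries from the standard genetic code; the remaining
# codons are omitted purely for brevity.
CODONS = {
    "ATG": "Met", "TTT": "Phe", "AAA": "Lys",
    "TGG": "Trp", "GAA": "Glu", "TAA": "STOP",
}

def translate(dna):
    """Read a DNA string three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODONS.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return "-".join(protein)

print(translate("ATGTTTAAATGGTAA"))  # Met-Phe-Lys-Trp
```

A dozen lines suffice once the encoding is regular; the worry above is precisely that the connectome may offer no comparably regular scheme to exploit.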

The second reason to be cheerful is that legibility might flow naturally from function. That is, after all, pretty much what happens with organs other than the brain. The heart is not mysterious, because it has a clear function and its structure is very legible in engineering terms in the light of that function. The brain is a good deal more complex than that, but on the other hand we already know of neurons and groups of neurons that do intelligibly carry out functions in our sensory or muscular systems.

There are big problems when it comes to the higher cognitive functions, though. First, we don’t understand consciousness the way we understand pumps and levers. Even when it comes to the behaviour of fruit fly larvae, we can relate inputs and outputs to neural activity in a sensible way. For conscious thought it may be difficult to tell which neurons are doing it without already knowing what it is they’re doing. It helps a lot that people can tell us about conscious experience, though when it comes to subjective, qualitative experience we have to remember that Zombie Twin tells us about his experiences too, though he doesn’t have any. (Then again, since he’s the perfect counterpart of a non-zombie, how much does it matter?)

Second, conscious processing is clearly non-generic in a way that nothing else in our bodies appears to be. Muscle fibres contract, and one does it much like another. Our lungs oxygenate our blood, and there’s no important difference between bronchi. Even our gut behaves pretty generically; it copes magnificently with a bizarre variety of inputs, but it reduces them all to the same array of nutrients and waste.

The conscious mind is not like that. It does not secrete litres of undifferentiated thought, producing much the same stuff every day and whatever we feed it with. On the contrary, its products are minutely specific – and that is the whole point. The chances of our being able to identify a standard thought module, the way we can identify standard functions elsewhere, are correspondingly slight as a result.

Still, one last reason to be cheerful: one thing the human brain is exceptionally good at is intuiting patterns from observations; far better than it has any right to be. It’s not for nothing that ‘seeing’ is literally the verb for vision and metaphorically the verb for understanding. So exhibiting patterns of neural activity might just be the way to trigger that unexpected insight that opens the problem out…

Dennett recants

Yes, Dennett has recanted. Alright, he hasn’t finally acknowledged that Jesus is his Lord and Saviour. He hasn’t declared that qualia are the real essence of consciousness after all. But his new book From Bacteria to Bach and Back does include a surprising change of heart.

The book is big and complex: to be honest it’s a bit of a ragbag (and a bit of a curate’s egg). I’ll give it the fuller review it deserves another time, but it seems to be worth addressing this interesting point separately. The recantation arises from a point on which Dennett has changed his mind once before. This is the question of homunculi. Homunculi are ‘little people’ and the term is traditionally used to criticise certain kinds of explanation, the kind that assume some module in the brain is just able to do everything a whole person could do. Those modules are effectively ‘little people in your head’, and they require just as much explanation as your brain did in the first place. At some stage many years ago, Dennett decided that homunculi were alright after all, on certain conditions. The way he thought it could work was a hierarchy of ever stupider homunculi. Your eyes deliver a picture to the visual homunculus, who sees it for you; but we don’t stop there; he delivers it to a whole group of further colleagues; line-recognising homunculi, colour-recognising homunculi, and so on. Somewhere down the line we get to a homunculus whose only job is to say whether a spot is white or not-white. At that point the function is fully computable and our explanation can be cashed out in entirely non-personal, non-mysterious, mechanical terms. So far so good, though we might argue that Dennett’s ever stupider routines are not actually homunculi in the correct sense of being complete people; they’re more like ‘black boxes’, perhaps, a stage of a process you can’t explain yet, but plan to analyse further.

Be that as it may, he now regrets taking that line. The reason is that he no longer believes that neurons work like computers! This means that even at the bottom level the reduction to pure computation doesn’t quite work. The reason for this remarkable change of heart is that Terrence Deacon and others have convinced Dennett that the nature of neurons as entities with metabolism and a lifecycle is actually relevant to the way they work. The fact that neurons, at some level, have needs and aims of their own may ground a kind of ‘nano-intentionality’ that provides a basis for human cognition.

The implications are large; if this is right then surely computation alone cannot give rise to consciousness! You need metabolism and perhaps other stuff. That Dennett should be signing up to this is remarkable, and of course he has a get-out. This is that we could still get computer consciousness by simulating an entire brain and reproducing every quirk of every neuron. For now that is well beyond our reach – and it may always be, though Dennett speaks with misplaced optimism about Blue Brain and other projects. In fact I don’t think the get-out works even on a theoretical level; simulations always leave out some aspect of the thing simulated, and if this biological view is sound, we can never be sure that we haven’t left out something important.

But even if we allow the get-out to stand this is a startling change, and I’ve been surprised that no review of the book I’ve seen even acknowledges it. Does Dennett himself even appreciate quite how large the implications are? It doesn’t really look as if he does. I would guess he thinks of the change as merely taking him a bit closer to, say, the evolution-based perspective of Ruth Millikan, not at all an uncongenial direction for him. I think, however, that he’s got more work to do. He says:

The brain is certainly not a digital computer running binary code, but it is still a kind of computer…

Later on, however, he rehashes the absurd – and surely digitally computational – view he put forward in Consciousness Explained:

You can simulate a virtual serial machine on a parallel architecture, and that’s what the brain does… and virtual parallel machines can be implemented on serial machines…

This looks pretty hopeless in itself, by the way. You can do those things if you don’t mind doing something really egregiously futile. You want to ‘simulate’ a serial machine on a parallel architecture? Just don’t use more than one of its processors. The fact is, parallel and serial computing do exactly the same job, run the same algorithms, and deliver the same results. Parallel processing by computers is just a practical engineering tactic, of no philosophical interest whatever. When people talk about the brain doing parallel processing they are talking about a completely different and much vaguer idea and often confusing themselves in the process. Why on earth does Dennett think the brain is simulating serial processing on a parallel architecture, a textbook example of pointlessness?
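To see the futility concretely, here is a trivial Python sketch: restricting a parallel worker pool to a single worker just is serial execution, so nothing whatever is gained.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

# A 'parallel architecture': a pool that could run many workers at once.
# 'Simulating' a serial machine on it: simply allow only one worker.
with ThreadPoolExecutor(max_workers=1) as pool:
    pooled = list(pool.map(square, range(10)))

# An ordinary serial loop does exactly the same job...
looped = [square(n) for n in range(10)]
assert pooled == looped  # ...and delivers identical results
```

Dennett’s justification for attributing the manoeuvre to the brain runs: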

It is true that the brain’s architecture is massively parallel… but many of the brain’s most spectacular activities are (roughly) serial, in the so-called stream of consciousness, in which ideas, or concepts or thoughts float by not quite in single file, but through a Von Neumann bottleneck of sorts…

It seems that Dennett supposes that only serial processing can deliver a serially coherent stream of consciousness, but that is just untrue. On display here too is Dennett’s bad habit of using ‘Von Neumann’ as a synonym for ‘serial’. As I understand it the term “Von Neumann Architecture” actually relates to a long-gone rivalry between very early computer designs. Historically the Von Neumann design used the same storage for programs and data, while the more tidy-minded Harvard Architecture provided separate storage. The competition was resolved in Von Neumann’s favour long ago and is as dead as a doornail. It simply has no relevance to the human brain: does the brain have a Von Neumann or Harvard architecture? The only tenable answer is ‘no’.
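For what it’s worth, here is a toy illustration of what that old rivalry actually amounted to – a question about storage layout, nothing more (the instruction set here is invented for the example):

```python
# A toy accumulator machine, purely illustrative.
def run(program):
    acc, pc = 0, 0
    while program[pc][0] != "HALT":
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
        pc += 1
    return acc

# Harvard style: instructions and data kept in separate stores.
instructions = [("ADD", 5), ("ADD", 7), ("HALT", None)]
data = [0]

# Von Neumann style: a single store holds both, end to end.
memory = instructions + data

print(run(instructions), run(memory))  # 12 12 - behaviour is identical
```

Either layout computes exactly the same things; the distinction is engineering, which is the point.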

Anyway, whatever you may think of that, if Dennett now says the brain is not a digital computer, he just cannot go on saying it has a Von Neumann architecture or simulates a serial processor. Simple consistency requires him to drop all that now – and a good thing too. Dennett has to find a way of explaining the stream of consciousness that doesn’t rely on concepts from digital computing. If he’s up for it, we might get something really interesting – but retreat to the comfort zone must look awfully appealing at this stage. There is, of course, nothing shameful in changing your mind; if only he can work through the implications a bit more thoroughly, Dennett will deserve a lot of credit for doing so.

More another time.

God Helmet

Is God a neuronal delusion? Dr Michael Persinger’s God Helmet might suggest so. This nice Atlas Obscura piece by Linda Rodriguez McRobbie finds a range of views.

The helmet is far from new, partly inspired by Persinger’s observations back in the 1980s of a woman whose seizure induced a strong sense of the presence of God. In the 90s, Persinger decided to see whether he could stimulate creativity by applying very mild magnetic fields to the right hemisphere of the brain. Among other efforts he tried reproducing the kind of pattern of activity he had seen in the earlier seizure and, remarkably, succeeded in inducing a sense of the presence of God in some of his subjects. Over the years he has continued to repeat this exercise and others like it; the helmet doesn’t always induce a sense of God; sometimes people fall asleep, sometimes (like Richard Dawkins) they get very little out of the experience. Sometimes they have a vivid sense of some presence, but don’t identify it as God.

Could this, though, be the origin of theism? Do all religious experiences stem from aberrant neuronal activity? It seems pretty unlikely to me. Quite apart from the severely reductive nature of the hypothesis, it doesn’t seem to offer a broad enough account. People arrive at a belief in God by various routes, and many of them do not rely on any sense of immediate presence. For people brought up in religious families, and for pretty much everyone in, say, medieval Europe, God is or was just a fact, a natural part of the world. Some people come to belief along a purely rational path, or by one that seeks out meaning for life and the world. Such people may not require any sense of a personal presence of the divine; or they may earnestly desire it without attaining it – without thereby losing their belief. Even those whose belief stems from a sense of mystical contact with God do not necessarily or I think even typically experience it as a personal presence somewhere in the room; they might feel that instead they have ascended to a higher sphere, or experience a kind of communion which has nothing to do with location.

Equally, of course, that presence in the room may not seem like God. It might be more like the thing under the bed, or like the malign person who just might be in the shadows behind you on the dark lonely path. People who wake from sleep without immediately regaining motor control of their body may have the sense of someone sitting or pressing on their chest; a ‘hag’ or other horrid being, not God. Perhaps Persinger is lucky that his experiments do not routinely induce horror. Persinger believes that the way it works is that by stimulating one hemisphere of the brain, you make the other hemisphere aware of it and this other presence is translated into an external person of some misty kind. I doubt it is quite like that. I do suspect that a sense of ‘someone there’ may be one of the easiest feelings to induce because the brain is predisposed towards it, just as we are predisposed towards seeing faces in random graphic noise. The evolutionary advantages of a mental system which is always ready to shout ‘look behind you!’ are surely fairly easy to see in a world where our ancestors always needed to consider the possibility that an enemy or a predator might indeed be lurking nearby. The attention wasted on a hundred false alarms is easily outbalanced by the life-saving potential of one justified one.
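The trade-off is easy to put in expected-value terms. The numbers below are entirely invented; the asymmetry between the two costs is what does the work:

```python
# Hypothetical payoffs for the 'look behind you!' reflex.
p_real = 0.01                # one alarm in a hundred is justified
cost_false_alarm = 1         # a moment of wasted attention
cost_missed_threat = 10_000  # potentially fatal

expected_cost_of_jumping = (1 - p_real) * cost_false_alarm   # 0.99
expected_cost_of_ignoring = p_real * cost_missed_threat      # 100.0

print(expected_cost_of_jumping < expected_cost_of_ignoring)  # True
```

On almost any plausible numbers, jumpiness wins.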

So what is going on? Perhaps not that much after all. Reproducing Persinger’s results has proved difficult in most cases and it seems plausible that the effect actually depends as much on suggestion as anything else. Put people into a special environment, put a mild magnetic buzz into their scalp, and some of them will report interesting experiences, especially if they have been primed in advance to expect something of the kind. It is perfectly reasonable to think that electrical interference might affect the brain, but Persinger’s magnets really are pretty mild and it seems rather unlikely that the fields in question are really strong enough to affect the firing of neurons through the skull and into the rather resistant watery mass of the brain. In addition I would have to say that the whole enterprise has a curiously dated air about it; the faith in a rather simple idea of hemispheric specialisation, the optimistic conviction that controlling the brain is going to turn out a pretty simple business, and perhaps even the love of a good trippy experience, all seem strongly rooted in late twentieth century thinking.

Perhaps in the end the God Helmet is really another sign of an issue which has become more and more evident lately: the strong suggestibility of the human mind means that sometimes, even in neurological science, we are in danger of getting the interesting results we really wanted all along, however misleading or ill-founded they may really be.

Male and female brains

A debate from IAI about male and female minds. It is pretty much agreed between the speakers that men’s brains and women’s brains are not really different; the claimed physical differences all come down to size, women being smaller on average. Behavioural and psychological differences exist, but only statistically; if you plot individuals along a line, there is far more overlap than difference. All of that is ably set out by Gina Rippon. Simon Baron-Cohen agrees but wants to reserve some space for the influence of biology, which affects such matters as the incidence of autism. Helena Cronin puts it all down to evolution; you’ve got two strategies, competing for mates or nurturing your offspring; males tend to the first, women to the second, and many evolved differences flow from that, although human sexes are less distinct than those in some mammal species.

Perhaps the crux of the debate comes when Cronin denies the existence of the ‘glass ceiling’; fewer women get to the board room, she says, because fewer women choose that path. Rippon responds that there is still evidence that applicants with male names are treated more favourably.
At any rate, it seems that if we thought men had a thicker corpus callosum, or differed in brain structure in other ways, we were just wrong.
If you’re thirsting for more controversy on gender, you might want to look at Philippe Van Parijs’ paper on several apparent disadvantages to being male (via Crooked Timber).

Markram’s Electric Gland

The brain is not a gland. But Henry Markram seems to be in danger of simulating one – a kind of electric gland.

What am I on about? The Blue Brain Project has published details of its most ambitious simulation yet; a computer model of a tiny sliver of rat brain. That may not sound exciting on the face of it, but the level of detail is unprecedented…

The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm3 containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses.

The results are good. Without parameter tuning – that is, without artificial adjustments to make it work the way it should work – the digital simulation produces patterns of electrical activity that resemble those of real slivers of rat brain. The paper is accessible here. It seems a significant achievement and certainly attracted a lot of generally positive attention – but there are some significant problems. The first is that the methodological issues which were always evident remain unresolved. The second is a set of major weaknesses in the simulation itself. The third is that as a result of these weaknesses the simulation implicitly commits Markram to some odd claims, ones he probably doesn’t want to make.

First, the methodology. The simulation is claimed as a success, but how do we know? If we’re simulating a heart, then it’s fairly clear it needs to pump blood. If we’re simulating a gland, it needs to secrete certain substances. The brain? It’s a little more complicated. Markram seems implicitly to take the view that the role of brain tissue is to generate certain kinds of electrical activity; not particular functional outputs, just generic activity of certain kinds.

One danger with that is a kind of circularity. Markram decides the brain works a certain way, he builds a simulation that works like that, and triumphantly shows us that his simulation does indeed work that way. Vindication! It could be that he is simply ignoring the most important things about neural tissue, things that he ought to be discovering. Instead he might just be cycling round in confirmation of his own initial expectations. One of the big dangers of the Blue Brain project is that it might entrench existing prejudices about how the brain works and stop a whole generation from thinking more imaginatively about new ideas.

The Blue simulation produces certain patterns of electrical activity that look like those of real rat brain tissue – but only in general terms. Are the actual details important? After all, a string of random characters with certain formal constraints looks just like meaningful text, or valid code, or useful stretches of DNA, but is in fact useless gibberish. Putting in constraints which structure the random text a little and provide a degree of realism is a relatively minor task; getting output that’s meaningful is the challenge. It looks awfully likely that the Blue simulation has done the former rather than the latter, and to be brutal that’s not very interesting. At worst it could be like simulating an automobile whose engine noise is beautifully realistic but which never moves. We might well think that the project is falling into the trap I mentioned last time: mistaking information about the brain for the information in the brain.
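The random-text point is easy to demonstrate. A few lines of Python, with crude letter weights loosely echoing English frequencies (illustrative values only, not drawn from a real corpus), produce output that is word-shaped but means nothing:

```python
import random

alphabet = "etaoinshrdlu "                            # common letters plus space
weights = [12, 9, 8, 8, 7, 7, 6, 6, 6, 4, 4, 3, 18]  # rough relative frequencies

print("".join(random.choices(alphabet, weights=weights, k=80)))
# e.g. 'toe snadheis ril eota...' - text-like texture, zero meaning
```

Adding constraints of that kind is the easy part; meaning is the part that matters.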

Now it could be that actually the simulation is working better than that; perhaps it isn’t as generic as it seems, perhaps this particular bit of rat brain works somewhat generically anyway; or perhaps somehow in situ the tissue trains or conditions itself, saving the project most of the really difficult work. The final answer to the objections above might be if the simulation could be plugged back into a living rat brain and the rat behaviour shown to continue properly. If we could do that it would sidestep the difficult questions about how the brain operates; if the rat behaves normally, then even though we still don’t know why, we know we’re doing it right. In practice it doesn’t seem very likely that that would work, however: the brain is surely about producing specific control outputs, not about glandularly secreting generic electrical activity.

A second set of issues relates to the limitations of the simulation. Several of the significant factors I mentioned above have been left out; notably there are no astrocytes and no neurotransmitters. The latter is a particularly surprising omission because Markram himself has done significant work in the past on trying to clarify this area. The fact that the project has chosen to showcase a simulation without them must give rise to a suspicion that its ambitions are being curtailed by the daunting reality; that there might even be a dawning realisation internally that what has been bitten off here really is far beyond chewing and the need to deliver has to trump realism. That would be a worrying straw in the wind so far as the project’s future is concerned.

In addition, while the simulation reproduces a large number of different types of neuron, the actual wiring has been determined by an algorithm. A lot depends on this: if the algorithm generates useful and meaningful connections then it is itself a master-work beside which the actual simulation is trivial. If not, then we’re back with the question of whether generic kinds of connection are really good enough. They may produce convincing generic activity, and maybe that’s even good enough for certain areas of rat brain, but we can be pretty sure it isn’t good enough for brain activity generally.

Harking back for a moment to methodology, there’s still something odd in any case about trying to simulate a process you don’t understand. Any simulation reproduces certain features of the original and leaves others out. The choice is normally determined by a thesis about how and why the thing works: that thesis allows you to say which features are functionally necessary and which are irrelevant. Your simulation only models the essential features and its success therefore confirms your view about what really matters and how it all operates. Markram, though, is not starting with an explicit thesis. One consequence is that it is hard to tell whether his simulation is a success or not, because he didn’t tell us clearly enough in advance what it was he was trying to make happen. What we can do is read off the implicit assumptions that the project cannot help embodying in its simulation. To hold up the simulation as a success is to make the implicit claim that the brain is basically an electrical network machine whose essential function is to generate certain general types of neural activity. It implicitly affirms that the features left out of the simulation – notably the vast array and complex role of neurotransmitters and receptors – are essentially irrelevant. That is a remarkable claim, quite unlikely, and I don’t think it’s one Markram really wants to make. But if he doesn’t, consistency obliges him to downplay the current simulation rather than foreground it.

To be fair, the simulation is not exactly being held up as a success in those terms. Markram describes it as a first draft. That’s fair enough up to a point (except that you don’t publish first drafts), but if our first step towards a machine that wrote novels was one that generated the Library of Babel (every possible combination of alphabetic characters plus spaces) we might doubt whether we were going in the right direction. The Blue Brain project in some ways embodies technological impatience; let’s get on and build it and worry about the theory later. The charge is that as a result the project is spending its time simulating the wrong features and distracting attention from the more difficult task of getting a real theoretical understanding; that it is making an electric gland instead of a brain.
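The Library of Babel comparison is apt partly because such a generator really is trivial to write – which is exactly why producing it would represent no progress towards novel-writing. A minimal sketch:

```python
import itertools
import string

def library_of_babel(length):
    # Every possible page of the given length - all the novels are in
    # here somewhere, buried in an ocean of gibberish.
    alphabet = string.ascii_lowercase + " "
    for page in itertools.product(alphabet, repeat=length):
        yield "".join(page)

print(next(library_of_babel(10)))  # 'aaaaaaaaaa'
```

Everything is in there, and therefore nothing is.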

Equivocating uploads

Information about the brain is not the same as information in the brain; yet in discussions of mind uploading, brain simulation, and mind reading the two are quite often conflated or confused. Equivocating between the two makes the task seem far easier than it really is. Scanners of various kinds exist, after all, and have greatly improved in recent years; technology usually goes on improving over time. If all we need is to get a really good scan of the brain in order to understand it, then surely it can only be a matter of time? Alas, information about the brain is an inadequate proxy for the information in the brain that we really need.

We’re often asked to imagine a scanner that can read off the structural details of a brain down to any required level of detail. Usually we’re to assume this can be done non-destructively, or even without disturbing the content and operation of the brain at all. These are of course unlikely feats, not just beyond existing technology but rather hard to imagine even on the most optimistic view of future developments. Sometimes the ready confidence that this miracle will one day be within our grasp is so poorly justified I find it tempting to think that the belief in the possibility of such magic scans is being buoyed up not by sanguine technological speculation but unconsciously by much older patterns of thinking; that the mind is located in breath, or airy spirits, or some other immaterial substance that can be sucked out of a body and replaced without physical consequences. Of course on the other side it’s perfectly true that lots of things once considered impossible are now routinely achieved.

But even allowing ourselves the most miraculous knowledge of the brain’s structure, so what? We could have an exquisite plan of the structure of a disk or a book without knowing what story it contained. Indeed, it would only take a small degree of inaccuracy or neglect in copying to leave us with a duplicate that strongly resembled the original but actually reproduced none of the information bearing elements; a disk with nothing on it, a book with random ink patterns.

Yeah but, the optimists say; the challenge may be huge, the level of detail required orders of magnitude beyond anything previously attempted, but if we copy something with enough fidelity the information just is going to come along with the copy necessarily. A perfect copy just has to include a perfect copy of the information. Granted, in the case of a book it’s not much use if we have the information but don’t know how to read it. The great thing about simulating a brain, though, is that we don’t even need to understand it. We can just set it up and start it going. We may never know directly what the information in the brain was, but it’ll do its job; the mind will upload, the simulation will run.

In the case of mind reading the marvellous flexibility of the mind also offers us a chance to cheat by taking some measurable, controllable brain function and simply using it as a signalling device. It works, up to a point, but it isn’t clear why brain communication by such lashed-up indirect routes is any more telepathy than simply talking to someone; in both cases the actual information in the brain remains inaccessible except through a deliberate signalling procedure.

Now of course a book or a disk is in some important ways actually a far simpler challenge than a brain. The people who designed, made, and use the disk or the book take great care to ensure that a specific, readily readable set of properties encodes the information required in a regular, readable form. These are artefacts designed to carry information, as is a computer. The brain is not artefactual and does not need to be legible. There’s no need for a clear demarcation between information-bearing elements and the rest, and there’s no need for a standardised encoding or intelligible structures. There are, in fact, many complex elements that might have a role in holding information.

Suppose we recalibrated our strategy and set out to scan just the information in the brain; what would we target? The first candidate these days is the connectome; the overall pattern of neural connections within the brain. There’s no doubt this kind of research is currently very lively and interesting – see for example this recent study. Current research remains pretty broad brush stuff and it’s not really clear how much detail will ever be manageable; but what if we could map the connections perfectly? How could we read off the content? It’s actually highly unlikely that all the information in the brain is encoded as properties of a network. The functional state of a neuron depends on many things, in particular the receptors and transmitters; the known repertoire of these has grown greatly in recent years. We know that the brain does not operate simply through electrical transmission, with chemical controls from the endocrine system and elsewhere playing a large and subtle part. It’s not at all unlikely that astrocytes, the non-neural cells in the brain, have a significant role in modulating and even controlling its activity. It’s not at all unlikely, on the other hand, that ephaptic coupling or other small electrical induction effects have a significant role, too. And while myself I wouldn’t place any bets on exotic quantum physics being relevant, as some certainly believe, I think it would be very rash to assume that biology has no further tricks up its sleeve in the shape of important mechanisms we haven’t even noticed yet.

None of that can currently be ruled out of court as irrelevant. A computer has a specified way of working and if electrical interference starts changing the value of some bits in working memory you know it’s a bug, not a feature. In the brain, it could be either; the only way to judge is whether we like the results or not. There’s no reason why astrocyte states, say, can’t be key for one brain region or for one personality, and irrelevant for others, or even legitimate at one time and unhelpful interference at others. We just cannot know what to point our magic scanner at, and it may well be that the whole idea of information recorded in but distinct from a substrate just isn’t appropriate.

Yeah but again, total perfect copy? In principle if we get everything, we get everything, don’t we? The problem is that we can’t have everything. Copying, simulating, or transmitting necessarily involve transitions during which some features are unavoidably left out. Faith in the possibility of a good copy rests on the belief that we can identify a sufficient set of relevant features; so long as those are preserved, we’re good. We’re optimistic that one day we can do a job on the physical properties which is good enough. But that’s just information about the brain.

Mind Uploading: does speed matter?

Not according to Keith B. Wiley and Randal A. Koene. They contrast two different approaches to mind uploading: in the slow version neurons or some other tiny component are replaced one by one with a computational equivalent; in the quick, the brain is frozen, scanned, and reproduced in a suitable computational substrate. Many people feel that the slow way is more likely to preserve personal identity across the upload, but Wiley and Koene argue that it makes no difference. Why does the neuron replacement have to be slow? Do we have to wait a day between each neuron switch? Hard to see why – why not just do the switch as quickly as feasible? Putting aside practical issues (we have to do that a lot in this discussion), why not throw a single switch that replaces all the neurons in one go? Then if we accept that, how is it different from a destructive scan followed immediately by creation of the computational equivalent (which, if we like, can be exactly the same as the replacement we would have arrived at by the other method)? If we insist on a difference, argue Wiley and Koene, then somewhere along the spectrum of increasing speed there must be a place where preservation of identity switches abruptly to loss of identity; this is quite implausible, and there are no reasonable arguments that suggest where this maximum speed should occur.

One argument for the difference comes from non-destructive scanning. Wiley and Koene assume that the scanning process in the rapid transfer is destructive; but if it were not, the original brain would continue on its way, and there would be two versions of the original person. Equally, once the scanning is complete there seems to be no reason why multiple new copies could not be generated. How can identity be preserved if we end up with multiple versions of the original? Wiley and Koene believe that once we venture into this area we need to expand our concept of identity to include the possibility of a single identity splitting, so for them this is not a fatal objection.

Perhaps the problem is not so much the speed in itself as the physical separation? In the fast version the copy is created some distance away from the original whereas in gradual replacement the new version occupies essentially the same space as the original – might it be this physical difference which gives rise to differing intuitions about the two methods? Wiley and Koene argue that even in the case of gradual replacement, there is a physical shift. The replacement neuron cannot occupy exactly the same space as the one it is to replace, at least not at the moment of transfer. The spatial difference may be a matter of microns rather than metres, but here again, why should that make a difference? As with speed, are we going to fix on some arbitrary limit at which identity ceases to be preserved – and why should it lie just there?

I think Wiley and Koene don’t do full justice to the objection here. I don’t think it really rests on physical separation; it implicitly rests on continuity. Wiley and Koene dismiss the idea that a continuous stream of consciousness is required to preserve identity, but it can be defended. It rests on the idea that personal identity resides not in the data or the function in general, but a specific instance in particular. We might say that I as a person am not equivalent to SimPerson V – I am equivalent to a particular game of SimPerson V, played on a particular occasion. If I reproduce that game exactly on another occasion, it isn’t me, it’s a twin.
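Programmers have a ready-made way of putting this distinction: equality of pattern versus identity of instance. A minimal Python sketch (the ‘moves’ are of course invented):

```python
# Two games with identical histories are equal, but not the same game.
game_a = ["spawn", "build", "explore", "sleep"]
game_b = list(game_a)    # an exact copy of the history

print(game_a == game_b)  # True  - the same pattern
print(game_a is game_b)  # False - a distinct instance: a twin
```

The objection to the fast method is that it delivers, at best, the first kind of sameness.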

Now the gradual replacement scenario arguably maintains that kind of identity. The new, artificial neurons enter an ongoing live process and become part of it, whereas in the scan and create process the brain is necessarily stopped, translated into data, and then recreated. It’s neither the speed nor the physical separation that disrupts the preservation of the identity: it’s the interruption.

Can that be right though – is merely stopping someone enough to disrupt their identity? What if I were literally frozen in some icy accident, so that my brain flat-lined; and then restored and brought back to life. Are we forced to say that the person after the freezing is a twin, not the same? That doesn’t seem right. Perhaps brute physical continuity has some role to play after all; perhaps the fact that when I’m frozen and brought back it’s the same neurons that are firing helps somehow to sustain the identity of the process over the stoppage and preserve my identity?

Or perhaps Wiley and Koene are right after all?

Upon Thy Glimmering Thresholds

I have been reading about the Brain Preservation Foundation (BPF), which hopes that chemical and other methods, including a refined version of plastination, will enable brains to be preserved with such fidelity that memories, personality, and even identity are retained.

This may well seem reminiscent of the older cryonic preservation projects which have not always had a good press over recent years, though they still continue to operate and indeed have refined their processes somewhat. But although the BPF also has a vision of bringing people back to life after their natural death, it is in many ways a different kettle of fish. It does not itself offer any kind of service but merely seeks to promote research, and it does not expect to see a practical system for many years. In addition, it makes its case and addresses objections in a commendably clear and thoughtful way – see for example this blog post by John M Smart, co-founder of the BPF. Perhaps this is partly also to do with its impressive panel of advisors, which includes such names as Chalmers, Seung, and Eagleman, to mention only a few.

I have some reservations about the project, which fall into several categories; there are general concerns about the practicality of preservation, doubts about personal identity, and doubts about the claimed social value of letting people have a prolonged or renewed life; but there are positive factors, too.

There are clearly a lot of technical issues involved in preserving a brain (often said to be the most complex object in the universe) in all its detail, most of which I’m not competent to assess. I think the main general practical issue (if this counts as practical) is that although you might get a quite different impression from the popular press, we still don’t have a really clear idea of how the brain works – so in preserving it, it’s hard to know whether we’re getting the right features. Clearly we would want the neuronal structure preserved in fine detail; but we keep finding out more about such matters as the incredibly complex sets of neurotransmitters that make the system work, about electrical interactions, and about the actual and possible role of astrocytes. If we’re optimistic we may feel we’re close to a working picture, but then we felt like that about genetics until the human genome was sequenced, and it’s now becoming increasingly clear that we didn’t know the half of it. Even without considering whether there might after all be something in the Penrose/Hameroff theory of unknown quantum mechanics operating in microtubules, or in similar ideas from outside the mainstream, there is a lot to think about. Of course the BPF can justly say that it is well aware of these issues, that they only reinforce the need for more research, and that working on preservation could well be a good way of pushing that research forward.

I think it’s conceivable that there are also problems waiting to be discovered at a deeper level and that the brain can’t even theoretically be ‘frozen’ in working shape – particularly if mental activity turns out to be inherently dynamic. This could happen in a couple of general ways. First, the brain could be like a zero-gravity box full of bouncing and colliding balls. You can’t halt the activity in such a box and restart at an arbitrary point: you have to start at the point where the balls were thrown in. Second, the brain could be like the old astronomical clocks which were geared together in such a way that they could not be reset; if they ever ran down and stopped, the only way to put them right again was to rebuild them. If either of these issues affects the brain, then it could not be restarted from its state at the last moment of consciousness, but only from some earlier state which might be a few minutes back, a few weeks, or in the worst case, the moment of birth! Now most of us would not mind losing just the last few minutes of our life if it meant we could then live on indefinitely, but even if that’s all it amounts to, recreating that earlier viable state from the later one we had preserved might be extremely difficult – if that’s the game we’re in.

The good news on that is that all the empirical evidence suggests there is no such problem. People have come back from states in which brain activity had apparently run down very close to zero without any major long-term problem: so we can probably afford to be optimistic that a stopped brain is not a dead brain ipso facto.

More of a problem (perhaps) is the BPF’s hope that preserving the brain might preserve personal identity. The philosophical literature on personal identity stretches back many centuries and I seem to remember that my own earlier self had some sophisticated views on it. As an undergraduate I think I developed a kind of morphic neo-nominalist position which dealt with nearly all the issues; but over the years I have become a caveman and my position now is more or less: see this? this rock rock is rock what your problem? To put it another way I think brute physical identity is essential to personal identity.

I’m aware, of course, that the atoms and molecules of our body are constantly changing – but so what? Why should we think reality resides at the most micro level? Those particles barely have identities themselves; they’re more than half-way towards being mere mathematical constructions. People always say there’s a good chance we’re all breathing the odd molecule of oxygen that was once in the nostrils of Julius Caesar, but how would we know? Can you label a molecule? Can you recognise one? Can you even track where it goes? If I put three on a packing case, can you pick out the one you chose earlier? Isn’t it part of the deal that two protons have identical attributes, apart from their spatial co-ordinates? They’re not so much things as loci. Does talking about the same/different molecule, then, mean anything much? No, reality does not reside exclusively at the molecular level and I rest my case instead on the physical identity of certain neural structures, irrespective of their particle content. We are those critical neurons, I think.

Now you might think that the BPF is off to a flying start with me here, because instead of proposing to upload my mind onto magnetic media, they’re aiming to preserve the echt physical neurons. But I do not think they are optimistic about the prospects of literally restarting the self-same set of neurons: rather they adopt a ‘patternist’ view in which it’s the functional pattern of your mind that carries your identity. I doubt that: for one thing it seems to mean there could be as many of you as there are copies of the pattern. However, the strange thing is, I don’t think the majority of people will actually care: rightly or wrongly they’ll be just as happy with the prospect of a twin – really a kind of hyper-twin, far more like you than any real-life twin – as they would be with their own identity. Hey, they’ll say, I’m not really the same person I used to be ten years ago anyway, any more than you are the same person as that callow young morphic neo-nominalist. Perhaps when we are in the unprecedented situation of being able to copy ourselves our conception of identity must naturally change and loosen, though as we ontologists say, the idea certainly gives me the willies.

There’s another deep problem in becoming immortal: I might run out. As it is, old people sometimes seem to repeat themselves or get stuck in a groove. Perhaps there comes a natural point when really you’ve said and thought everything important you’re ever going to say and think, and any further lifespan is just going to be increasingly stale repetition. Perhaps the price of acquiring a really fresh lot of mental plasticity is, frankly, being a new person. I have once or twice met older people who, while fit and mentally agile, seemed to feel that the job of their life was basically complete, and while they didn’t specially want to die, there wasn’t really that much detaining them any more, either. It’s a common observation, moreover, that old people sometimes seem to repeat themselves or get stuck in a groove.

I don’t really know how far the idea of people ‘running out’ is true or how far failing memory and appetite for fresh exploration might simply be the product of waning vigour and physical energy, of a kind the BPF might hope to rectify (if rectification is the appropriate term). However, it does seem likely that even if it were true different people would run out at different stages, and probably few of us have completely exhausted our potential by the time we’ve done three score and ten. George Bernard Shaw took the view that three hundred years would be about the optimum lifespan, and it does feel like a comfortable benchmark: so even if there are natural limits to how far we should go on, it might be good to have the option of another couple of centuries.

That brings us on to the social benefits which the BPF suggests might accrue from having older minds around. They suggest that these older minds would be liberal and enlightened, and a force for progress, [Correction: John M. Smart has very courteously pointed out that the BPF doesn’t suggest this at all. I don’t quite know where I picked up the idea from, but I apologise for the error] which seems to fly in the face of many generations of experience: old people tend to become increasingly conservative. Old scientists whose theories have been refuted don’t usually give them up: they merely die in due course, still protesting that they were right, and make way for a new generation.

If one comes from a culture that reveres certain ancestors, the prospect of being able, as it were, to bring back Benjamin Franklin to put the Supreme Court right on a couple of points about the Constitution might look pretty appealing; but how many subjects are the oldies going to be experts on? You may think now that you’re a pretty hip grandpa for understanding Facebook: in fifty years, are you going to have any grasp at all of what’s going on in a society mediated by electronic transactions on systems that haven’t even been conceived of yet?

The good news here is that if the BPF is right, future generations will be able to take the old people out and put them away again as required like library books. Your future life may turn out to be a series of disconnected episodes several hundred years apart. You may find yourself now and then waking up in worlds where your senatorial views on certain matters are valued, but don’t count on having the vote, if such a thing still exists in recognisable form.

All in all, it can’t be bad to reward research, and it certainly can’t be bad to think about the issues, which the BPF also does a good job on, so I wish them plenty of luck and success.

 

Thatter way to consciousness

‘Aping Mankind’ is a large-scale attack by Raymond Tallis on two reductive dogmas which he characterises as ‘Neuromania’ and ‘Darwinitis’. He wishes especially to refute the identification of mind and brain, and as an expert on the neurology of old age, his view of the scientific evidence carries a good deal of weight. He also appears to be a big fan of Parmenides, which suggests a good acquaintance with the philosophical background. It’s a vigorous, useful, and readable contribution to the debate.

Tallis persuasively denounces exaggerated claims made on the basis of brain scans, notably claims to have detected the ‘seat of wisdom’ in the brain.  These experiments, it seems, rely on what are essentially fuzzy and ambiguous pictures arrived at by subtraction in very simple experimental conditions, to provide the basis for claims of a profound and detailed understanding far beyond what they could possibly support. This is no longer such a controversial debunking as it would have been a few years ago, but it’s still useful.

Of course, the fact that some claims to have reduced thought to neuronal activity are wrong does not mean that thought cannot nevertheless turn out to be neuronal activity, but Tallis pushes his scepticism a long way. At times he seems reluctant to concede that there is anything more than a meaningless correlation between the firing of neurons in the brain and the occurrence of thoughts in the mind. He does agree that possession of a working brain is a necessary condition for conscious thought, but he’s not prepared to go much further. Most people, I think, would accept that Wilder Penfield’s classic experiments, in which the stimulation of parts of the brain with an electrode caused an experience of remembered music in the subject, pretty much show that memories are encoded in the brain one way or another; but Tallis does not accept that neurons could constitute memories. For memory you need a history, you need to have formed the memories in the first place, he says: Penfield’s electrode was not creating but merely reactivating memories which already existed.

Tallis seems to start from a kind of Brentanoesque incredulity about the utter incompatibility of the physical and the mental. Some of his arguments have a refreshingly simple (or if you prefer, naive) quality: when we experience yellow, he points out, our nerve impulses are not yellow. True enough, but then a word need not be printed in yellow ink to encode yellowness either. Tallis quotes Searle offering a dual-aspect explanation: water is H2O, but H2O molecules do not themselves have watery properties; you cannot tell what the back of a house looks like from the front, although it is the same house. In the same way our thoughts can be neural activity without the neurons themselves resembling thoughts. Tallis utterly rejects this: he maintains that to have different aspects requires a conscious observer, so we’re smuggling in the very thing we need to explain. I think this is an odd argument. If things don’t have different aspects until an observer is present, what determines the aspects they eventually have? If it’s the observer, we seem to be slipping towards idealism or solipsism, which I’m sure Tallis would not find congenial. Based on what he says elsewhere, I think Tallis would say the thing determines its own aspects in that it has potential aspects which only get actualised when observed; but in that case didn’t it really sort of have those aspects all along? Tallis seems to be adopting the view that an appearance (say yellowness) can only properly be explained by another thing that already has that same appearance (is yellow). It must be clear that if we take this view we’re never going to get very far with our explanations of yellow or any other appearance.

But I think that’s the weakest point in a sceptical case which is otherwise fairly plausible. Tallis is Brentanoesque in another way in that he emphasises the importance of intentionality – quite rightly, I think. He suggests it has been neglected, which I think is also true, although we must not go overboard: both Searle and Dennett, for example, have published whole books about it. In Tallis’ view the capacity to think explicitly about things is a key unique feature of human mindfulness, and that too may well be correct. I’m less sure about his characterisation of intentionality as an outward arrow. Perception, he says, is usually represented purely in terms of information flowing in, but there is also a corresponding outward flow of intentionality. The rose we’re looking at hits our eye (or rather a beam of light from the rose does so), but we also, as it were, think back at the rose. Is this a useful way of thinking about intentionality? It has the merit of foregrounding it, but I think we’d need a theory of intentionality  in order to judge whether talk of an outward arrow was helpful or confusing, and no fully-developed theory is on offer.

Tallis has a very vivid evocation of a form of the binding problem, the issue of how all our different sensory inputs are brought together in the mind coherently. As normally described, the binding problem seems like lip-synch issues writ large: Tallis focuses instead on the strange fact that consciousness is united and yet composed of many small distinct elements at the same time.  He rightly points out that it’s no good having a theory which merely explains how things are all brought together: if you combine a lot of nerve impulses into one you just mash them. I think the answer may be that we can experience a complex unity because we are complex unities ourselves, but it’s an excellent and thought-provoking exposition.

Tallis’ attack on ‘Darwinitis’ takes on Cosmidoobianism, memes and the rest with predictable but entertaining vigour. Again, he presses things quite a long way. It’s one thing to doubt whether every feature of human culture is determined by evolution; it’s another to suggest, as Tallis seems to, that human culture has no survival value, or at any rate, had none until recently, too recently to account for human development. This is reminiscent of the argument put by Alfred Russel Wallace, Darwin’s co-discoverer of natural selection: he later said that evolution could not account for human intelligence because a caveman could have lived his life perfectly well with a much less generous helping of it. The problem is that this leaves us needing a further explanation of why we are so brainy and cultured; Wallace, alas, ended up resorting to spiritualism to fill the gap (we can feel confident that Tallis, a notable public champion of disbelief, will never go that way). It seems better to me to draw a clear distinction between the capacity for human culture, which is wholly explicable by evolutionary pressure, and the contents of human culture, which are largely ephemeral, variable, and non-hereditary.

Tallis points out that some sleight of hand with vocabulary is not unknown in this area, in particular the tactic of the transferred epithet: a word implying full mental activity is used metaphorically – a ‘smart’ bomb is said to be ‘hunting down’ its target – and the important difference is covertly elided. He notes the particular slipperiness of the word ‘information’, something we’ve touched on before.

It is a weakness of Tallis’ position that he has no general alternative theory to offer in place of those he is attacking – consciousness remains a mystery (he sympathises with Colin McGinn’s mysterianism to some degree, incidentally, but reproves him for suggesting that our inability to understand ourselves might be biological). However, he does offer positive views of selfhood and free will, both of which he is concerned to defend. Rather than the brain, he chooses to celebrate the hand as a defining and influential human organ: opposable thumbs allow it to address itself and us; it becomes a proto-tool, and this gives us a sense of ourselves as acting on the world in a tool-like manner. In this way we develop a sense of ourselves as a distinct entity and an agent – an existential intuition. This is OK as far as it goes, though it does sound in places like another theory of how we get a mere impression, or dare I say an illusion, of selfhood and agency – the very position Tallis wants to refute. We really need more solid ontological foundations. In response to critics who have pointed to the elephant’s trunk and the squid’s tentacles, Tallis grudgingly concedes that hands alone are not all you need and a human brain does have something to contribute.

Turning to free will, Tallis tackles Libet’s experiments (which seem to show that a decision to move one’s hand is actually made a measurable time before one becomes aware of it). So, he says, the decision to move the hand can be tracked back half a second? Well, that’s nothing: if you like you can track it back days, to when the experimental subject decided to volunteer; moreover, the aim of the subject was not just to move the hand, but also to help that nice Dr Libet, or to forward the cause of science. In this longer context of freely made decisions the precise timing of the readiness potential (RP) is of no account.

To be free, according to Tallis, an act must be expressive of what the agent is, the agent must seem to be the initiator, and the act must deflect the course of events. If we are inclined to doubt that we can truly deflect the course of events, he again appeals to a wider context: look at the world around us, he says, and who can doubt that collectively we have diverted the course of events pretty substantially? I don’t think this will convert any determinists. The curious thing is that Tallis seems to be groping for a theory of different levels of description – in effect, a dual-aspect theory. I would have thought dual-aspect theories ought to be quite congenial to Tallis, as they represent a rejection of ‘nothing but’ reductionism in favour of an attempt to give all levels of interpretation parity of esteem, but alas it seems not.

As I say, there is no new theory of consciousness on offer here, but Tallis does review the idea that we might need to revise our basic ideas of how the world is put together in order to accommodate it. He is emphatically against traditional dualism, and he firmly rejects the idea that quantum physics might hold the explanation. Panpsychism, he allows, may have a certain logic, but it generates more problems than it solves. Instead he points again to the importance of intentionality and the need for a new view that incorporates it: in the end ‘Thatter’, his word for the indexical, intentional quality of the mental world, may be as important as matter.