Archive for August, 2006


‘No,’ said he, ‘nor none
Of all these spirits, but myself alone,
Knows anything till he shall taste the blood.
But whomsoever you shall do that good,
He will the truth of all you wish unfold;
Who you envy it to will all withhold.’

Views of the afterlife seem to me to have become markedly more optimistic since ancient times; one of the most depressing versions must surely be the one in the Odyssey, where the dead prove to be mere shadows, devoid of all intelligence until they are provided with revivifying blood.

The idea of blood as the animating feature of the mind has acquired a modern echo of sorts in the theory put forward by Kenneth J. Dillon, namely that red blood cells provide the basic mechanism for a magnetoreceptive system that many animals possess to some degree: in human beings its clarity as a sense is somewhat clouded over, but it plays a vital role in the generation of consciousness.

I ought to admit to begin with that my knowledge of the theory is based purely on this extract from a longer and wide-ranging book, so I may well have missed some of the background: Dillon treats the existence of a magnetoreceptive sense in a wide variety of species, and of a human ability to perceive light with the skin, as established facts which require some explanation, whereas I wasn’t aware that either was particularly well evidenced. However, the idea of thinking, or at least sensing, through the agency of the blood, has some appeal.

It’s probably true that the role of blood in general has been unduly neglected in cognitive science: we know quite well that our emotions and patterns of thought are strongly influenced by hormones and other substances in the bloodstream, but with some honourable exceptions, the fact rarely gets much of a mention in discussions of consciousness. Magnetoreception is a slightly different matter, but after all, the view that phenomenal consciousness arises from an electromagnetic field, while not exactly mainstream, has been put forward by more than one author; if it is so, isn’t it plausible that all those iron-rich red blood cells sweeping round our body have some role in the em field? The brain engrosses a disproportionate share of the body’s blood supply – perhaps there’s another reason for that, and it isn’t just a matter of being a greedy energy consumer.

Dillon suggests that his theory might help to explain blindsight, the phenomenon in which people who are blind so far as conscious vision is concerned can nevertheless ‘guess’ correctly the location of things they apparently can’t see. Clearly if one had a misty magnetoreceptive sense, it might be able to fill in where your eyes failed you. But blindsight, so far as I know, only occurs in cases of certain kinds of specific, limited damage. A magnetoreceptive sense ought to work irrespective of the visual system, but there’s no blindsight in cases where the eyes are destroyed, and I don’t think it would work even for blindsight patients if they had their eyes covered or closed (though so far as I know, that variation on the experiment has not been carried out). Of course, one of the leading characteristics of blindsight is that it isn’t conscious, so if the red blood cells are supplying it, that must presumably be distinct from any direct role they may play in consciousness.

Dillon also thinks that his theory might help with the binding problem, but I’m not convinced about this either. The binding problem is the issue of how the data from different senses get combined into a smoothly-running, uninterrupted and fully co-ordinated view of reality. The brain never gives us faulty lip-sync or sudden jumps and pauses in our view of events, but exactly how it pulls off this feat is unknown (some would argue that it doesn’t have to, because the problem is misconceived). Now people often suggest various means by which the impressions from different senses could be brought together, and I suspect this is what Dillon has in mind; but just bringing them together isn’t enough: we need a method of working out what noises and what sights should be associated with the same instants, and of patching them together on the fly in a smooth sequence (and doing it all more or less instantaneously, because a lag of any appreciable size between reality and our view of it would clearly have a significant negative survival value!)

I also see some prima facie problems with the red blood cell theory. If the theory is true, oughtn’t we to be much more aware of magnetic fields in our environment? So far as I know the human body is largely unresponsive to magnets. If we have any kind of magnetoreceptive apparatus, wouldn’t we be sensitive in some way to electric motors, cathode ray tubes, and anything else with a significant magnetic component? Goodness knows what would happen to people undergoing an fMRI scan, since they are likely to be exposed to really whopping magnetic fields for an hour or more. But that point cuts both ways because, of course, fMRI would not be possible if blood didn’t have significant magnetic properties which other tissues lack.

I also wonder why a magnetoreceptive facility would have remained so much in the background. You would think it would be a rather useful ability for an animal to have. It seems particularly strange that human beings should have it, or have had it, but allow it to remain almost completely dormant. Dillon suggests a number of possible reasons. We might need to be in a more trance-like state for it to work (an odd situation – we need to be less conscious in order to use the system which supports consciousness. Perhaps the role of supporting human-style consciousness is a new adaptation which has spoiled the sensory function.) Or our upright posture, habit of wearing clothes, or tendency to surround ourselves with artificial magnetic fields (ah, so it does have some effect!) has messed things up.

I’m no biologist, but I suspect that magnetoreceptive sensing isn’t common because it doesn’t work as well as some of the more usual senses. The creatures that indisputably have an electrical or magnetic sensory apparatus tend (I think) to need it because they live in murky conditions where vision is no good, or possibly because they want it to sense the earth’s magnetic field as an aid to navigation. Human beings, living in well-lit terrestrial conditions, and not being migratory, wouldn’t actually have much use for such a sense. In fact, I remain sceptical about their ever having had it to any noticeable degree.

I’m not convinced then – but you have to give Dillon credit for boldness and originality, qualities we’re surely going to need before the mystery of consciousness is dispelled.

Bayes

The Max Planck Institute has set up a new project called the Bayesian Approach to Cognitive Systems, or BACS. Claiming to be inspired by the way real biological brains work, the project aims to create new artificial intelligence systems which can deal with high levels of real-world complexity; navigating through cluttered environments, tracking the fleeting expressions on a human face, and so on. There’s no lack of confidence here; the project’s proponents say they expect it to have a substantial impact on the economy and on society at large as the old dream of effective autonomous robots finally comes true.

It seems remarkable that the work of an eighteenth-century minister should now become such hot stuff technologically. Thomas Bayes is a rather mysterious figure, who published no scientific papers during his lifetime, yet somehow managed to become a member of the Royal Society: he must, I suppose, have been a great talker, or letter-writer. The theorem on which his fame rests, published only after his death, provides a way of calculating conditional probabilities. It gives a rational way of arriving at a probability based purely on what we know, rather than treating probability as an objective feature of the world which we can only assess correctly when we have all the data. If I want to know the odds of my taking out a green sock from my drawer, for example, I would normally want to know what socks were in there to begin with, but Bayes allows me to quantify the probability of green just on the basis of the half-dozen socks I have already pulled out. Philosophically, Bayes is naturally associated with the subjectivist point of view, which says that all probability is really just a matter of our limited knowledge – though given the lack of documentary evidence we can really only guess what his philosophical views might have been.
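The sock example can be made concrete. Under a uniform prior over the proportion of green socks, Bayesian updating reduces to Laplace’s rule of succession (a standard special case; the choice of prior is my assumption, not something the example specifies): the credence that the next sock is green is (greens + 1) / (draws + 2).

```python
from fractions import Fraction

def p_next_green(draws):
    """Rule-of-succession estimate of P(next sock is green),
    based only on the socks drawn so far (uniform Beta(1,1) prior)."""
    greens = sum(1 for sock in draws if sock == "green")
    return Fraction(greens + 1, len(draws) + 2)

# The half-dozen socks already pulled out, two of them green:
draws = ["green", "blue", "black", "green", "blue", "black"]
print(p_next_green(draws))  # 3/8
```

Note that the estimate needs no knowledge of what remains in the drawer: with no draws at all it simply returns the prior, 1/2, and each new sock nudges it from there.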

Universally accepted to begin with, then somewhat neglected, his ideas have been taken up in practical ways in recent years, and are now used by Google, for example.

Why are they so useful in this context? I think one reason is that they might offer a way round the notorious frame problem. This is one of the big barriers to progress with AI: the need to keep updating all your knowledge about the world every time anything changes. The problem as originally conceived was particularly bad because in addition to noting what had changed, you had to generate ‘no change’ statements for all the other items of knowledge about the world in your database. Daniel Dennett and others have reinterpreted this as a much wider philosophical problem about coping with real-world knowledge.

Is there a solution? Well, the main reason the frame problem is so bad is because systems which rely on classical logic, on propositional and predicate calculus, cannot tolerate contradictions. They require every proposition in the database to be labelled either true or false: when a new proposition comes up, you have to test out the implications for all the existing ones in order to avoid a contradiction arising. It’s clearly a bad thing to have conflicting propositions in the database, but the problem is made much worse by the fact that in classical logic there is a valid inference from a contradiction to anything (one of the counterintuitive aspects of classical logic): this means any contradiction means complete disaster, with the system authorised to draw any conclusions at all. If you could have a different system, one that tolerated contradictions without falling apart, the problem would be circumvented, and that is why McCarthy, even as he was describing the frame problem for the first time, foresaw that the solution might lie in non-monotonic logics; that is, ones in which new information can force the withdrawal of conclusions already drawn, rather than requiring everything to be settled once and for all as true or false.
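To see why contradictions make a classical knowledge base so expensive, consider a toy database of propositions (the representation and the facts are purely illustrative, my own invention): every single update has to be checked against everything already stored, because one contradiction would license any conclusion whatsoever.

```python
def consistent(db):
    """A classical knowledge base cannot contain both P and not-P:
    from a contradiction, anything at all would follow."""
    return not any(("not " + p) in db for p in db)

def add_fact(db, fact):
    """Each update re-checks the whole database for contradictions;
    this global bookkeeping is the kind of cost the frame problem turns on."""
    new_db = db | {fact}
    if not consistent(new_db):
        raise ValueError("contradiction introduced by: " + fact)
    return new_db

db = {"door is open", "light is on"}
db = add_fact(db, "robot holds key")       # accepted
try:
    db = add_fact(db, "not door is open")  # rejected outright
except ValueError as err:
    print(err)  # contradiction introduced by: not door is open
```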

Where can we find a good non-monotonic system? It’s not too difficult to extend normal logic to include a third value, neither true nor false, which we might call uncertain, or neutral: many would say that this is a more realistic model of reality. The snag with a trivalent logical system is that it sacrifices one of the main tactics available, namely that of deducing the falsity of a proposition from the fact that it leads to a contradiction. If there’s always a third possibility, contradictions are hard to derive, and as a result trivalent logics are much less powerful tools for drawing new conclusions than classical logic (which isn’t exactly a powerhouse of new insights itself).
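A minimal sketch of that trade-off, using strong Kleene three-valued logic (one standard trivalent system; the post doesn’t name a specific one), with Python’s None standing in for ‘uncertain’:

```python
# Strong Kleene three-valued connectives: True, False, or None ("uncertain").
def k_not(p):
    return None if p is None else not p

def k_and(p, q):
    if p is False or q is False:
        return False          # one false conjunct settles it
    if p is None or q is None:
        return None           # otherwise uncertainty propagates
    return True

# In classical logic, P AND NOT P is always False, which is exactly what
# lets us refute P by deriving a contradiction.  With a third value the
# refutation can fail: the conjunction may come out merely "uncertain".
print(k_and(True, k_not(True)))   # False
print(k_and(None, k_not(None)))   # None -- no contradiction derived
```

The second line is the loss of deductive power in miniature: when a proposition is uncertain, assuming it and deriving ‘P and not-P’ no longer yields falsity, so reductio arguments stop going through.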

Step forward, Bayesian methods. Now we can allocate a whole range of values to our propositions, representing the likelihood or the level of credence we assign to each. We don’t need to re-evaluate everything when a new piece of information comes up, because head-on contradictions no longer arise; and we’re no longer dealing in formal deductions anyway – instead we can use each new piece of evidence to make a rational adjustment in values. We don’t get an instant conclusion, but what may be better: a gradual refinement of our beliefs whose ratings get more accurate the longer we go on. We can start without much information at all, and still draw reasonable conclusions.
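That gradual refinement of credences, as opposed to a true/false relabelling, can be sketched with a single application of Bayes’ theorem repeated over incoming evidence (the proposition and the sensor likelihoods below are invented for illustration):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior credence in a proposition after one
    piece of evidence, by Bayes' theorem."""
    numerator = prior * likelihood_if_true
    evidence = numerator + (1 - prior) * likelihood_if_false
    return numerator / evidence

# Start agnostic (0.5) about "the door is open"; each sensor reading
# favouring "open" 4:1 nudges the credence instead of forcing a
# wholesale re-evaluation of the knowledge base.
credence = 0.5
for _ in range(3):
    credence = bayes_update(credence, 0.8, 0.2)
print(round(credence, 3))  # 0.985
```

No head-on contradiction ever arises: conflicting evidence simply pushes the number back down, and the longer the evidence accumulates, the more settled the value becomes.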

This sounds pretty good, but in addition Bayesian methods are well established in the field of neural networks and there’s some reason to think that this might be one of the ways real human brains work, especially in the case of perception. Rather than performing computations on visual data, it might well be that our brains use a Bayesian encoding, representing, say, the probability that we’re seeing a straight edge at a certain distance from us, and using new data to update the relevant values.
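As a toy version of that idea (the candidate distances, Gaussian noise model, and readings are all assumptions of mine, not anything from Dillon or the BACS project): maintain a probability for every hypothesis about where the edge is, and reweight the whole distribution with each noisy reading.

```python
import math

# Hypotheses: the straight edge is at one of these distances (metres).
distances = [d / 10 for d in range(5, 31)]            # 0.5 m .. 3.0 m
belief = {d: 1 / len(distances) for d in distances}   # flat prior

def update(belief, measurement, noise_sd=0.3):
    """Weight each distance hypothesis by the Gaussian likelihood of the
    noisy measurement, then renormalise: Bayes' rule on a grid."""
    weighted = {
        d: w * math.exp(-((measurement - d) ** 2) / (2 * noise_sd ** 2))
        for d, w in belief.items()
    }
    total = sum(weighted.values())
    return {d: w / total for d, w in weighted.items()}

for reading in [1.4, 1.6, 1.5]:      # three noisy range readings
    belief = update(belief, reading)
print(max(belief, key=belief.get))   # most probable distance: 1.5
```

Nothing is ever declared true or false; the readings just sharpen the distribution around the best-supported hypothesis, which is roughly what the Bayesian-coding story says populations of neurons are doing.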

This all seems excitingly plausible so far as basic cognitive functions are concerned, but what about consciousness itself? It seems to me quite a reasonable hypothesis that our opinions and beliefs are held in a Bayesian kind of way – mostly with varying degrees of certainty, and with a degree of tolerance for inconsistency. Changes in what we believe about the world do generally seem to arrive as the result of a relatively gradual accumulation of evidence, rather than through a sudden deductive insight.

But what about our phenomenal experience, the famous ‘hard problem’? Here, as so often before, we seem to run aground a bit. I would have expected Bayesian perceptions to give us a rather cloudy, probabilistic view of the world, but instead we have a pretty clear and distinct one. What colour is the rose?

“Well, I’d say it looks 85% red, but it also looks 5% pink but in a poor light, and 4.5% orange. There’s actually a distinct possibility of picture or model about the rose itself, and I can see an outside chance of hologram.”

It very much isn’t like that, or so it seems to me: our senses present us with a pretty unambiguous world: if there are mists, they are generally external. In all the cases of ambiguous images I can think of, the brain enforces one interpretation at a time: you may be able to switch between seeing the shape as convex or concave, say, but you can’t see it as evens either way.

How much does that matter? We don’t have to solve all the problems of cognitive science at once, and however confident it may be, I don’t think the Max Planck Institute is attempting to. But I wonder why we are given this definite view of the world if the underlying mechanisms deal only in varying levels of probability? It can’t be that way for no reason: we can only assume that there was some survival value in our brains working that way. It might be that this sharp focus is somehow a side-effect of full-blown consciousness, though I can’t imagine why that should be; it might merely be that our perceptual systems are so good there’s generally no point in confusing us with anything but the top probability.

But I think this might be a clue that for reasons which remain obscure, Bayesian methods alone will turn out to be not quite enough, even for some fairly basic examples of cognition. I won’t be standing in the path of any of the early model robots, anyway.