Archive for December, 2006

Mystery gift

John Stewart has written an interesting paper on the future of consciousness – where will evolution take it next? His discussion is strongly rooted in the Global Workspace theory, but even if you don’t subscribe to that school of thought it makes good sense. The prime function of consciousness, he says, is to give us new and ultimately better adaptive responses. In his view, it is the Global Workspace that allows the human mind to bring together resources which were not previously linked, but which can be used together in new and effective ways. This process becomes especially powerful when, in human beings, it gives rise to declarative knowledge – explicit, conscious thought. It’s this ability that makes human beings able to frame complex and long-term plans, a huge benefit so far as survival is concerned.

But the use of declarative reasoning, thus far, is limited: in particular, while humans use it to find means of achieving their goals, they don’t make very effective use of it in determining their goals in the first place: moreover, where they do think explicitly about their long-term objectives, they are very vulnerable to interference from hedonic impulses. In simpler language, the goals which you have chosen for good reasons of principle tend to be undermined by the pursuit of short-term pleasures thrown up by the more primitive parts of the mind. Passion ends up dominating reason.

This has a very familiar ring to it: for thousands of years philosophers and religious leaders have been enjoining us to raise our minds above ephemeral pleasures and seek more lofty goals. Stewart is happy to make the connection with religious thought, and suggests that certain religious and contemplative traditions have developed techniques which we could use to help us develop our consciousness to a new level.

Interesting stuff, but there are a few points which seem dubious. In the first place, what he seems to be talking about is self-improvement rather than evolution: nothing we learn during our lifetime is transmitted with our genes. I suppose it might be the case that if enlightened thinking became a skill we all had to learn for our own good, those who had genes which especially fitted them for it might acquire an advantage and so the gene pool might move in that direction.

Second, does the kind of detached thinking he mentions really have survival value? I don’t think Buddhist monks take up meditation in order to give their genes a better chance, and in fact some religious traditions have definite leanings towards celibacy and the sacrifice of one’s own reproductive prospects. Stewart seems to think that if we choose our goals rationally, we will choose ones that best serve our survival: but there’s no purely rational reason to choose one goal rather than another – that’s why we have built-in drives to begin with. It seems quite likely to me that a meditative, rational human race, operating on an unemotional level, might easily decide not to reproduce – or perhaps even not to eat. It may be that we need to stay stupid enough for our own good.

The question of where consciousness might go is an interesting one, though. What mystery gift might Mother Nature have in store for us? There is of course some debate about whether human beings have escaped from evolutionary pressures for the foreseeable future, either by controlling their own environment, or perhaps just by moving around so much that there are none of the isolated populations which seem to play an important part in speciation.

Since it’s nearly Christmas, I propose to address the issue on an entirely frivolous level and present a Christmas wish-list of my top ten favourite improvements to consciousness…

Coming in at number 10 is Distributed Processing. Why can’t we bud off a separate train of thought to deal with some subsidiary problem while we go on thinking about something else? We could have a whole tapestry of different threads separating and re-converging.

At number 9 I’d like an Indexed Memory. I don’t need anything complex here – just a date-based system will do, so that I can remember what happened at any specified time and date.

Number 8 is a Vivid Memory. I’d like to be able to run personal flashbacks the way people in films do, where you relive the events being recalled in realistic detail.

At 7, I want Control of My Dreams. Not that I have terrible nightmares, you understand – I rarely recall any dreams at all. Some people apparently do have some control of some of their dreams, but I’d like to be able to script them completely. Purely for research purposes, of course. Well, partly for research purposes.

Number 6 is the ability to enter Altered States of Consciousness at will, without having to do any of that starving, meditating, or consuming of ayahuasca. If this is too much, I’d settle for a degree of control over my own endorphins.

At 5, I’ve got (or rather, I haven’t yet got) Control of the Attention; the ability to concentrate on things, or indeed, ignore them.

Number 4 is Automation. There are many things I have to do laboriously in the forefront of my consciousness – calculations, writing routine letters, that sort of thing – which I wish I could delegate to unconscious functions which I’m sure are perfectly up to the job.

At 3, I want, ahem, Control of My Emotions. I don’t actually want to be Mr Spock, but when someone treads on my foot I should like to be able to stop being annoyed about it immediately instead of an hour and a half later.

Number 2 is Control of Unconscious Functions. I’d like to be able to summon up placebo effects without being put through a confidence trick first; to stop the body’s exaggerated and unhelpful response to burns and some other injuries, and rev my own heart up a bit when I know it’s going to be needed but my unconscious mind doesn’t.

At number 1, (since this is an intellectually oriented site), is Transparency, or Being Able To See How It Works. You cannot observe yourself in the process of composing your own thoughts, no matter how quickly you look over your shoulder. But wouldn’t it be good if you could?

Berkeley

Most of those who have views about consciousness seem to be monist materialists. Indeed, the essential problem of consciousness is often formulated as being how we reconcile it with materialist physics, without any expectation of anyone’s asking why we should want to. An embattled minority still rally round the dualist flag, but that’s more or less it so far as popular metaphysical options are concerned.

But popularity isn’t everything, and there is of course another position with an impeccable philosophical pedigree in the shape of idealism, the view most famously propounded by Bishop Berkeley with the maxim ‘to be is to be perceived’. According to Berkeley things only exist because they feature in some mind: we are saved from a capricious dream-world of our own only by the mind of God, which encompasses and sustains everything, guaranteeing the consistency of the world and keeping things going when no human being is perceiving them. Axel Randrup and Peter B Lloyd, in different ways, have taken this same path, concluding that the solution to the problem of consciousness is easier if we assume that the mental world is the real one and the physical world a construction or fiction arising out of it.

What could motivate such a stance? I suspect that both Randrup and Lloyd have other reasons than consciousness for their idealist views, but both think it solves problems in that area which would otherwise pose formidable difficulties. Randrup sees a contradiction in the normal materialist view: it requires us to think that all of our mental life arose out of the evolution of simple matter, yet at the same time insists that that matter is wholly independent and separate from the mental stuff of consciousness. Lloyd thinks that the ‘hard problem’ becomes easy on an idealist view: the problem is about reconciling our experiences with the physical world, but if there is no physical world then we’re home and dry.

Although I can see the appeal of these views, I don’t think either argument is really convincing. It’s not strictly contradictory to say that the mental arises from the physical and then reflects the physical. There is, indeed, a tension involved in having both mental and physical stuffs in your account, but that can be resolved as easily by materialism as idealism, or, a little less easily, by a plausible dualist system.

I don’t think the hard problem is really a matter of reconciling our experience with the physical world so much as accounting for our phenomenal experience in any way whatever. It’s as though we were examining a distant building in foggy conditions: we can see an elaborate roof high up which appears to be floating in space: we can also see the lower part of some columns near the ground, but nothing else is visible. Unfortunately the roof seems to be a completely different size and shape from the building suggested by the columns, and not even aligned with it. Well, says Lloyd, it’s OK because the columns you can see are really just a kind of illusion: actually there’s nothing there at all. That solves the consistency problem, but what we really wanted to know was what’s holding the roof up: and that problem is at least as bad as before.

However, idealism does have some things going for it. Randrup, rightly I think, rejects the argument that idealism leads automatically to solipsism. It’s true that if all we have to go on is our own perceptions, we could achieve a nice bit of ontological parsimony by denying that anyone else’s perceptions exist. But materialists are not immune to that argument, since even they would generally accept that our experiences are the only evidence we have about the nature of external reality. They are obliged to justify their belief in the outside world by quoting the remarkable consistency and saliency of some of the hypothetical real-world entities which we take to be behind our perceptions: but idealists can take a similar line to support their belief in other people and even physical entities, without committing themselves to the independent reality of all these entities.

This does leave the idealists needing an explanation for the consistency of our experiences other than their having a source in an independent real world. For Randrup, this is a matter of intersubjectivity, a sharing of perceptions by groups of minds: for Lloyd, a more orthodox Berkeleyan, it arises from the metamind of which we are all parts, and which we can and perhaps should, call God.

Randrup is keen to explain that idealism does not mean we have to abandon all our science and just begin treating the world as being more or less an arbitrary dream: our theories need some reinterpretation but they remain valid. The trouble is, I think, that once the real causality of the world has been relocated elsewhere, the scientific story seems to be redundant. If the world were being spun out of the imagination of a communal mind or metamind, why would it bother with all the details of chemistry and sub-atomic physics (though some aspects of modern physics might well be taken to resemble an imagined story whose inconsistencies had gradually got out of control)? Why would it stage a long history of evolution, and why would it take so long over working out something which it had thought up for itself?

Moreover, where does this leave us with consciousness? Whether you like it or not, one of the great attractions of computationalism, and to a lesser extent of other materialist explanations, is that they seem to offer a reasonably plausible and quite detailed explanation of what is going on – a real reduction of mentality to something easier to cope with. If idealism is true, we more or less have to accept the mental life as an unanalysed given, which is rather unsatisfying. In fairness, I think both Randrup and Lloyd would have at least some things to say by way of explaining consciousness, while none of the materialist theories on offer is anywhere near complete and satisfactory. But if we have to assess which of the two accounts looks more fully developed and offers more in the way of partial explanations, I don’t think there’s much contest.