Testable quantum effects in the brain?

Picture: Binocular rivalry. The idea that quantum mechanics is part of the explanation of consciousness, in one way or another, is supported by more than one school of thought, some of which have been covered here in the past. Recently MLU was quick to pick up on a new development in the shape of a paper by Efstratios Manousakis which claims that testable predictions based on the application of quantum theory have been borne out by experiment.

The claims made are relatively modest compared to those of Penrose and Hameroff, say. Manousakis is not attempting to explain the fundamental nature of conscious thought, and the quantum theory he invokes isn’t new. At its baldest, his paper merely suggests that dominance duration of states in binocular rivalry can be accurately modelled by assuming we’re dealing with a small quantum mechanical system embedded in a classical brain – although if that much were true, it would certainly raise further issues.

But let’s take it one step at a time. What is binocular rivalry, to begin with? When radically different images are presented to the left and right eye, instead of a blurry mixture of both images, we generally see one fairly clearly (occasionally a composite made of bits of both images); but curiously enough, the perceived image tends to switch from one to the other at apparently random intervals. Although this switching process can be influenced consciously, it happens spontaneously and therefore seems to be a kind of indecisiveness in some unconscious mechanism in our visual system.

Manousakis proposes a state of potential consciousness, with the two possible perceptions hanging in limbo. When the relevant wave function is collapsed through the intervention of another, purely classical brain mechanism, one or other of the appropriate neural correlates of consciousness is actualised and the view through one eye or the other enters actual consciousness. This model clearly requires perception to operate on two levels: one system that generates the potential consciousness, and another that actualises states of consciousness by checking up every now and then.

Thus far we have no particular reason to believe that the application of quantum concepts is anything more than a rather recondite speculation; but Manousakis has shown that the persistence of one state in the binocular rivalry, and the frequency of switching, can be predicted on the basis of his model: moreover, it explains and accurately predicts another observed phenomenon, namely that if the stimuli in a binocular rivalry experiment are removed and returned at intervals, the frequency of switching is significantly reduced. If I’ve understood it correctly (and I’m not quite sure I have), one of the two images tends to stay around because when you collapse the wave function a second time, there is a high probability of it collapsing the same way. If you remove the images for a while, no new collapses can occur until the images return. It’s as though you were throwing a die and moving to the other state when you threw a six; if you keep throwing, the changes will happen often, but if you take the die away for a few minutes between each throw, the changes will become less frequent.
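The die analogy is simple enough to simulate. What follows is purely my own toy sketch – the check interval, switch probability and presentation schedule are invented numbers, and Manousakis’s actual model is a quantum formalism, not this classical caricature – but it reproduces the qualitative prediction: fewer opportunities for collapse mean fewer switches.

```python
import random

def simulate_rivalry(total_time, check_interval, switch_prob, stimulus_on):
    """Toy version of the die-throwing analogy: at each periodic 'check'
    (a collapse of the wave function) the percept switches with a fixed
    probability, but checks only occur while the stimulus is present."""
    switches, t = 0, 0.0
    while t < total_time:
        if stimulus_on(t) and random.random() < switch_prob:
            switches += 1  # the dominant image flips to the other one
        t += check_interval
    return switches

random.seed(1)
# Continuous presentation: a collapse opportunity every 0.5 s for 10 minutes
continuous = simulate_rivalry(600, 0.5, 1 / 6, lambda t: True)
# Intermittent presentation: stimulus visible only 1 s in every 5 s
intermittent = simulate_rivalry(600, 0.5, 1 / 6, lambda t: (t % 5.0) < 1.0)
print(continuous, intermittent)  # switching is roughly five times rarer
```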

Manousakis has also demonstrated that his theory is consistent with the changes observed in subjects who had taken LSD (a puzzling choice of experiment – I’m slightly surprised it’s even legal – but it seems it reflects established earlier findings in the field).

So – a breakthrough? Possibly, but I see some issues which need clearing up. First, to nail the case down, we need better reasons to think that no mere classical explanation could account for the observations. There might yet be easier ways to account for changes in dominance duration. We also need some explanation of why quantum mechanics only seems to apply in unusual cases of ambiguity like binocular rivalry. A traditional theorist who attributes binocular rivalry to problems with the brain’s interpretative systems has no trouble explaining why the odder effects only occur when we impose special interpretive challenges by setting up unusual conditions: but if a quantum mechanical system is doing the job Manousakis proposes, I would have thought occasional quantum jumps would have been noticeable in ordinary perception, too.

It would help, in addition, to have a clearer idea of how and why quantum mechanics is supposed to apply here. It could presumably be that potential consciousness is generated at a microscopic level where quantum effects would naturally be observable: an account of how that works would be helpful – are we back with Penrosian microtubules? However, Manousakis seems to leave open the possibility that we’re merely dealing with an analogy here, and that the maths he employs just happens to work for both good old-fashioned quantum effects and for a subtle mental mechanism whose basic nature is classical. That would be interesting in a different way.

It will be interesting, in any case, to see whether anyone else picks up on these results.

Philosophy and Neuroscience

Picture: Philosophy and Neuroscience. I’ve just caught up a bit belatedly with The Philosophy and Neuroscience Movement (pdf), the paper Pete Mandik wrote with Andrew Brook, which he featured on his blog at the beginning of the month. It’s an interesting read and seems to have garnered quite a bit of attention, including a discussion (rather foggy, I thought) on Metafilter.

The general question of relations between science and philosophy has of course been a vexed one ever since the distinction started to be made (I don’t think someone like Isaac Newton would have seen any particular gulf between the two). Some scientists speak of philosophers the way an aristocrat might talk about the gypsies encamped on the edge of his land: with patronising disapproval, but also with a slight secret fear that they might have obscure magic secrets which could ruin the scientific harvest: philosophers for their part have been largely scared away from the grand metaphysical uplands by the obvious dangers in ontologising without a firm grasp of modern physics.

When it comes to philosophy of mind and neuroscience, the intertwining of the issues creates an especially great need for co-operation and interaction, and perhaps holds out some prospect of more of a meeting on equal terms. Some thinking along these lines must, at any rate, have lain behind the movement for working together which Brook and Mandik say began about twenty-five years ago; they rightly point out that the influence of scientific advances has transformed the way we think about language, memory and vision (in fact scientific understanding of the visual system has been influencing philosophical ideas for hundreds of years: the discovery of images on the retina must surely have predisposed philosophers towards thinking in terms of an homunculus, a ‘little man’ sitting and watching the pictures, and perhaps still adds some weight to the perceived importance of mental images as key intermediaries between us and reality).

But I think it’s still not uncommon for people on either side to assume that they can get on quite well on their own. Neuroscientists may be tempted to think they should just get on with the science, and let the philosophy take care of itself: maybe the philosophical answers will come along as a kind of bonus with the scientific results – and if they don’t, well, philosophers never answer anything anyway, do they? Philosophers, equally, may suppose the science is a matter of detail – oh, by the way, they tell me that the firing of c-fibres is not actually equivalent to pain, as it turns out, but it doesn’t really matter if it’s, you know, f-fibres or z-fibres: for the sake of brevity in this discussion let’s just pretend it is c-fibres…

Hence in part, I suppose, the paper, which discusses (necessarily in brief and summary form) a number of interesting areas of interaction which show how much there is to be gained by positive engagement.

An interesting one is the suggestion that neuroscience and philosophers of neuroscience have greatly strengthened the case against nomological (law-based) theories of scientific explanation. Physics tends to be taken as the paradigmatic science, but one of its leading characteristics is its amenability to nomological treatment. Physics produces clear universal laws which work with precise mathematical accuracy. In biology, things are very different, and we tend to have to work with explanations which are teleological (concerned with the purpose of things) or statistical in nature. This appears to be especially true in neuroscience, where the brain exhibits complex structure but does not seem to operate under simple laws.

Another area I found thought-provoking was the claim, which I found surprising at first, that new neuroscientific tools have led to a renaissance in introspective studies. Introspection, the direct inward examination of the contents of one’s own consciousness, was a no-go area for many years after the collapse of attempts to systematise introspectionist findings (indeed, we’ve been reading recently in the JCS about the disgust with introspectionism that led J.B. Watson to set up as a radical behaviourist, declaring that there actually were no damn contents of consciousness). The trouble with introspection has always been that the results are chaotic and unconfirmable: training designed to improve results raises the new danger of bias and getting out of your subjects only what you trained into them. How could this situation possibly be reclaimed?

Yet it’s true when you come to think of it that many of the numerous studies with fMRI and other scanners which have taken place in recent years have relied on introspective reports. The thing is, of course, that the scanners provide an avenue of confirmation and corroboration which Wundt and Titchener never had, and which legitimises introspective reports. If it weren’t already so securely fastened, this would be another nail in the coffin of radical behaviourism.

A third point among many which particularly struck me was the issue of neural semantics. Consideration of the brain and nervous system as information processors takes us into the philosophically intractable area of intentionality. The real difficulties (and the most interesting issues) here are well-known to philosophers but easily overlooked or undervalued by scientists. However, neurobiology offers influential examples of feature-detection mechanisms which might (or might not) point the way to a proper analysis of meaning. My personal view is that this is a particularly promising area for future development, where ancient mysteries might really be dispelled in part by future research.

The paper concludes with a rather downbeat look at consciousness itself. Faced with claims that consciousness is in part not neural, or even physical, many neuroscientists (and their ‘philosophical fellow-travellers’) ignore them or ‘throw science at it’, the authors say. They rightly consider this a risky approach, liable to lead to the familiar syndrome in which scientists explain a toy-town or simplified conception of consciousness, leaving the really tough problem breathing unacknowledged down their necks. This might be a slightly gloomy view, but it is impossible to disagree when the authors call for better rejoinders to such claims.

Anthropomorphism

Picture: Rupert and Trepur. I came across A defense of Anthropomorphism: comparing Coetzee and Gowdy by Onno Oerlemans the other day. The main drift of the paper is literary: it compares the realistic approach to dog sentiment in J.M. Coetzee’s Disgrace with the strongly anthropomorphic elephants in Barbara Gowdy’s The White Bone. But it begins with a wide-ranging survey of anthropomorphism, the attribution of human-like qualities to entities that don’t actually have them. It mentions that the origin of the term has to do with representing God as human (a downgrade, unlike other cases of anthropomorphism), notes Darwin’s willingness to attribute similar emotions to humans and animals, and summarises Derrida’s essay The Animal That Therefore I Am (More To Follow). The Derrida piece discusses the embarrassment a human being may feel about being naked in front of a pet cat (I didn’t know that the popular Ceiling Cat internet meme had such serious intellectual roots) and concludes that taking the consciousness of animals as seriously as our own threatens one of the fundamental distinctions installed in the foundations of our conception of the world.

That may be, but the attribution of human sentience to animals is rife in popular culture, especially when it comes to children. Some lists of banned books suggest that the Chinese government has cracked down on many apparently harmless children’s books; it turns out this is because at one time the Chinese decided to eliminate anthropomorphism from children’s literature, wiping out a large swathe of traditional and Western stories. I can’t help feeling a small degree of sympathy with this: a look at children’s television reveals so many characters who are either outright talking animals or (even stranger) humanoids with animal heads that you might well conclude there was some law against the depiction of human beings. It would surely seem odd to any aliens who might be watching that we were so obsessed with the fantasised doings of other species.

Or perhaps it wouldn’t seem strange at all, and they would merely make plans for a picture-book series about Hubert the Human, his body spherical with twelve tentacles just like a normal person, but his head displaying the strange features of Homo sapiens. It seems likely that our fondness for anthropomorphism has something to do with our marked tendency to see faces in random patterns: our brains are clearly set up with a strong prejudice towards recognising people, or sentience at least, even where it doesn’t really exist. Such a strong tendency must surely have evolved because of a high survival value – it seems plausible that erring on the side of caution when spotting potential enemies or predators, for example, might be a good strategy – and if that’s the case we might expect any aliens to have evolved a similar bias.

That bias is a problem for us when it comes to science, however. When considering animal behaviour, it seems natural, almost unavoidable, to assume that the same kind of feelings, intentions and plans are at work as those responsible for similar behaviour in humans. After all, humans are animals. It’s clear that other animals don’t make such complicated plans as we do; they don’t talk in the same way we do and don’t seem to have the same kinds of abstract thought. But some of them seem to have at least the beginnings or precursors of human-style consciousness.

Unfortunately, careful observation shows beyond doubt that some forms of animal behaviour which seemed purposeful are really just fantastically well-developed instincts. The sphex wasp seems to check its burrow out of forethought before dragging its victim inside; but if you move the victim slightly and make it start the pattern again, it will check the burrow again, and go on doing so tirelessly over and over, in spite of the fact that it knows, or should know, that nothing could possibly have gone into the burrow since the last check.

A parsimonious approach seems called for. The methodologically correct principle to apply was eventually crystallised in Morgan’s Canon:

‘In no case may we interpret an action as the outcome of the exercise of a higher mental faculty, if it can be interpreted as the exercise of one which stands lower in the psychological scale’

In effect, this principle sets up a strong barrier against anthropomorphism: we may only attribute human-style conscious thought to an animal if nothing else – no combination of instinct, training, environment and luck – can possibly account for its behaviour. I said this was ‘methodologically correct’, but in fact it is a very strong restriction, and it could be argued that if it were rigorously applied, the attribution of human-style cognition to certain humans might begin to look doubtful. According to Oerlemans, ethologists have been asking whether, by striving too hard to avoid anthropomorphism, we haven’t sometimes denied ourselves legitimate and valuable insights.

It’s interesting to reflect that in another part of the forest we are faced with a similar difficulty and have adopted different principles. Besides the question of when to attribute intelligence to animals, we have the question of when to do so for machines. The nearest thing we have to Morgan’s Canon here is the Turing Test, which says that if something seems like a conscious intelligence after ten minutes or so of conversation, we might as well assume that that’s what it is. Now as it happens, because of its linguistic bias, the Turing Test would not admit any animal species to the human level of consciousness; but it does seem to be a less demanding criterion. Perhaps this is because of the differing history in the two fields: we’ve always been surrounded by animals whose behaviour was intelligent in some degree, and perhaps need to rein in our optimism; whereas there were few machines until the nineteenth century, and the conviction that they could in principle be intelligent in any sense took time to gain acceptance – so a more encouraging test seems right.

Perhaps, if some future genius comes up with the definitive test for consciousness, it will lie somewhere between Morgan and Turing, and be equally applicable to animals and machines?

Reflexive Monism

Picture: Cat diagram.

Max Velmans has produced Reflexive Monism as a valiantly renewed effort to sort out the confused story of the relations between observer, object, and experience. This is one of those subjects that really ought to be perfectly straightforward but in fact has descended into such a dense thicket of philosophical clarification that the scope for misunderstanding and talking past one another is huge. In my more pessimistic moments I wonder whether the issue is really reclaimable at all, at least by means of further discussion.

Velmans’ view apparently stems from a revelation he experienced when he noticed that the world he had heretofore regarded as the public, objective, external one – the world we all experience, full of cats and a number of other things – was in fact a phenomenal world, a world as experienced by him. Those things out there are our experiences of the world, and so reflexive monism belongs with the externalist theories that seem to have become popular recently. It is also a dual aspect theory; that is, the one underlying stuff in which all monists must believe expresses itself in two ways, as the objective physical world and as the consciously experienced phenomenal world we actually see out there.

Dual aspect theories seem attractively sensible, and very probably true as far as they go; but I think one can be pardoned for still feeling slightly unsatisfied by them. Okay, so the world doesn’t consist of two kinds of stuff, it just has two aspects; but to round that out into a proper explanation we need an account of what an aspect might be. Ideally we also want an account of the nature of the one underlying stuff which would explain why on earth it expresses itself in two different ways. These accounts are not easy to give: in fact it is quite difficult to say anything at all about the fundamental stuff, much as all good metaphysicians must wish to do so. I don’t think these issues are quite so problematic for the poor benighted dualists, who have a good reason why things might appear in two different guises, or the more brutal kinds of monist, who can, as it were, just tick all the ‘No’ boxes on the form. Velmans, in fairness, has given us a helpful hint in the name of his theory: the reflexivity of his monism refers to the idea of the Universe becoming aware of itself through the medium of conscious individuals, which at least suggests where the two aspects might spring from.

Velmans brings out well what I think is the main attraction of externalism: that it eliminates the idea that the objects of perception are entirely in the head, that all we ever really experience are representations in the brain. Some of Velmans’ assaults on this idea, however, are a little dubious. Take his ‘skull’ argument: the phenomenal world extends as far as we can see – to the horizon and the dome of the sky, he suggests. Now if the phenomenal world is all inside my brain, my real, non-phenomenal skull exists beyond that world: beyond the dome of the sky. How ridiculous is that? Rhetorically, this conjures up the idea of the real skull floating in outer space, or perhaps enclosing the world like a second, bony sky. But really, in saying that the real skull is beyond the phenomenal world, we don’t mean it’s geographically or spatially a bit beyond it: we mean it’s in another world altogether; in a different mode of existence.

There’s a similarly questionable treatment of location in Velmans’ references to those materialists who would see experience as constituted by functions or patterns of neuron firing in the brain. Velmans again attributes to such people the belief that experience is actually located in the brain. They might well agree, but perhaps with some reservation: I think many or most functionalists, for example, would distinguish between a particular instantiation of a function, which certainly exists in a physical place in the brain, and the function itself, which could be run by other brains, and which exists in some Platonic realm where spatial position is irrelevant or meaningless.

The main problem for me, as with some other versions of externalism, is whether reflexive monism delivers the simplification it seems to promise or merely relocates the problem. Velmans provides diagrams which illustrate the difference between a dualist view, where the phenomenal perception floats above observer and object in a mental/spiritual world, a reductionist view where the percept is similarly dangling, and his own, where we have the object (a cat, as it happens), the observer, and nothing more than a couple of arrows. The trouble is, we still actually have the real cat and the perceived, phenomenal one: Velmans has pulled off a sly bit of legerdemain and shuffled one under the other.

Velmans spends some time expounding the idea of ‘perceptual projection’ – that the phenomenal world is projected out there into physical space – and defending himself against the charge of smearing real cats with phenomenal cat-perception stuff; but I think there is a worse difficulty. The phenomenal experience may have been projected out of our skulls, but it’s still all we get to deal with, and that seems to leave us dangerously isolated, close to the beginning of the broad and easy downward path which leads to solipsism. It’s not so much that the danger is inescapable – more that I’m left wondering whether taking all those phenomenal experiences out into the external world actually changed things all that much.

Velmans wraps up with an exposition of how his view impinges on the hard problem. In essence, he thinks that when we’ve grasped reflexive monism properly, we will see that the fact that the world has two different aspects is just one of those features which, although they are slightly mysterious, there is no need to worry about. We don’t agonise, he suggests, over the “hard problems” of physics – why does an electric current in a wire give rise to a magnetic field? Why do electrons behave like waves in some circumstances and like particles in others? Why is there any matter in the Universe at all?

Actually, I think people do agonise over those problems, as it happens. I must have spent a considerable number of shortish and frustrating periods of time wondering in vain why there was anything.

Picture: Lehar argument. Steve Lehar, who has carried on a long dialectic with Max Velmans, kindly wrote to express sympathy for some of the points above. There is a charming exposition of his views in cartoon form here.

Gestalt Isomorphism and the Primacy of Subjective Conscious Experience gives a more formal version, with a response from Velmans here and more here.

Joycean Consciousness

Picture: James Joyce. I’ve been reading David Lodge’s ‘Consciousness and the Novel’. Lodge points out that novelists share with cognitive scientists the aim of exploring human consciousness, but have characteristically different methods and concerns; typically concentrating on the idiosyncratic, qualia-ridden details of individual experience rather than seeking to extract generalisable scientific laws.

Lodge is good at technical analysis of the means by which novelists obtain their effects (I’m surprised he manages to write readable fiction while carrying such a weighty theoretical apparatus in his head – I should have thought it would induce a paralysing self-consciousness). In the first chapter he presents the development of narrative styles as being in part a search for more effective ways of conveying the reality of human conscious experience. In particular, he interprets modernist and stream-of-consciousness novels as attempting to break through the stale conventions of the traditional novel and give an altogether more immediate impression of consciousness from the inside. James Joyce is in his eyes the most radical and successful of the authors to make this attempt – in fact

‘He came as close to representing the phenomenon of consciousness as perhaps any writer has ever done in the history of literature.’

Well, up to a point. I think the Joycean style (as exemplified in Ulysses) is simply based on a new and different literary convention which is in reality at least as far from ordinary consciousness as the prose of Jane Austen. That explains why Joyce is, frankly, hard going at times and remains the preserve of the intellectual reader. If it were a vivid unmediated gateway into the phenomenal experience of the characters, Joycean prose would be the easiest to write and read and he would surely be among the most popular and accessible of authors.

This sort of paradoxical development – the attempt at nature which produces new artifice – is far from unique in Eng Lit, of course: it’s hard for modern readers to believe that the Lyrical Ballads, for example, could be presented as a breakthrough in the use of everyday language shorn of poetic affectation, yet start with something as peculiar as The Rime of the Ancient Mariner. But consideration of what Joyce does and does not achieve is interesting so far as the nature of conscious experience is concerned.

What are in fact the contents of consciousness? Drawing on introspection I find that some of them are indeed properly formed verbal sentences: as I sit here writing, fragments of text are replayed through my mind, adjusted and echoed immediately before being typed. Explicit words feature in my consciousness on other occasions too, but these occasions seem to me the exceptions and full-blown English sentences are certainly not the chief medium of my conscious experience.

That chief medium seems to be composed of thoughts which are as clearly focussed and meaningful as pieces of text, but have none of the same structure. I may think that a noise outside is probably the postman: to think so takes no appreciable time and does not involve the rehearsal of a formula over one or two seconds, as verbal thought might do. At the same time, I can go on thinking the same thought for a while if I choose, and when I think something else the transition may be abrupt, but is often smooth and unclear.

This is partly because, besides the mainstream thoughts, there are a number of other candidate thoughts lurking in the penumbra of consciousness. I may be thinking about what I’m writing, but I’m sort of aware that I might think about whether it’s time to leave quite soon. A transition between one thought and another may therefore be a matter of one coming forward and another receding – though it can be the sudden intrusion of an altogether new thought, too. Although this penumbra business is undoubtedly vague, I think it is always clear that only one thought is actually being thought at any given time.

At the risk of schematising too much, we could say that the mainstream of thought is accompanied by an intermittent superstructure of explicit verbal thoughts and often by a hazy underground of potential thoughts. But the mainstream itself is fluent and non-verbal, though just as meaningful as words. All my thoughts, moreover, come with a kind of background. When I think that a sound outside might be the postman, I do not have to think about the history of the Post Office, my uncle who once worked for it, or the items of post I am expecting some time soon; but all those items are somehow readily available; not in quite the same way as the candidate thoughts I mentioned above – they’re not sort of wanting to be thought about, they’re just available.

Besides this, there is a general awareness of the world around me. Sometimes this recedes into a penumbra like the one in which the candidate thoughts lurk – I’m not quite sure whether it is in fact the same penumbra, nor whether the spotlight of attention is the same for thoughts and perceptions. I rather think I can pay attention to a perception and a thought at the same time, but it may be that they have to jostle for essentially the same attention, and at best merely alternate at the forefront of my mind. I’m not quite sure.

The objects I do perceive all come with a ready-made interpretation. When I look at the screen, I see the text of a blog post, but if I wish I can change the interpretation and see characters in a particular font, arrays of pixels, or my computer. Again, I don’t quite know whether these interpretations are the same kind of thing as the background awareness that comes with my thoughts. There seems to be a similarity, but the interpretations are required in a way that the background isn’t. I can’t look at the screen without giving it some interpretation, but I can think about the postman while ignoring all the associations that he brings with him.

A special case of my awareness of the world is my awareness of my own body, which I think exceptionally does not require interpretation (there’s only one thing a pain in my foot can be), but which is closely associated with my awareness of my own emotions. Emotions may be linked with a racing heart or a tightness in the stomach, but they are also linked with the most complex and sophisticated of my thoughts. I have a general emotional background (fairly calm at the moment, but tinged with some slight tension about when I need to leave the house, and the prospect of seeing the dentist later), and I also have emotional reactions to particular perceptions and thoughts.

I have certainly omitted and misrepresented a good deal in the foregoing, but I hope this sketch of the contents of consciousness is recognisable enough.

Novelists of any school clearly face some severe problems. One is the inherent indescribability of the qualia which make up or accompany our perceptions, and of the inner nature of emotions. Two others appear equally insoluble. One is how to represent thoughts which are meaningful, but not verbal in form. One way is to translate the thoughts into words, and present them as if they were like utterances of the character in question: but this short-changes us on the fluid and instantaneous nature of thought, and breaks the link with the background; it also destroys the difference between mainstream thoughts and thoughts in actual words. The alternative is to describe the content of the thought, but this loses the sense of vivid intentionality which thoughts have and condemns us to a third-person view, outside the consciousness being depicted.

The other insoluble problem is how to present the simultaneous complexity of consciousness when the structure of prose obliges us to relate one thing at a time in sequence.

Traditional novelists use a mixture of translation and description to depict a character’s thoughts, adjusting the balance to suit their own purposes. Lodge says the ultimate development in this direction is free indirect style, in which the thoughts are translated into words but not presented as quotation. He regards Jane Austen as supreme in the use of this style; it is a particularly flexible and fluid approach which helps the author get the best of both translation and description while remaining lively and vivid.

I believe the second problem is mainly addressed in traditional novels by the use of carefully chosen implication, and by the even more powerful tool of implicature, where something omitted from or added to the account indicates to the reader a large amount of subsidiary information which is not explicitly given in the text. This approach allows the author, to some extent at least, to evoke some of the background to the character’s thoughts and also to indicate more than one level of consciousness at once.

However, although these strategies work quite well in traditional novels, there is a degree of artifice about them which may detract from the reality of the portrayal, and distance the reader from the consciousness of the character. How much better, then, to simply transcribe the stream of consciousness as it actually occurs in all its confusing heterogeneity? The trouble is that the two problems touched on above are still there. What Joyce tends to give us, in the famous last chapter of Ulysses, for example, is a straight translation of the mainstream of thoughts. Sentences and other grammatical structures are ignored or broken down to help indicate the rapidity and fluidity of the mainstream, but in essence we only get one level of consciousness. Since the author has undertaken to give us everything at that level he can no longer build in helpful implicatures by adding or leaving out carefully chosen items: the reader has to cope with the whole flood of thoughts, and is forced to do more work than normal to try to infer the background. An unexpected side effect is that not much emotional tone comes through: the words seem rather like the affectless, breathless drone of someone in a trance or in a state of delirium. Joyce succeeds in making it readable by actually practising some covert selection of items and moderating the grasshopper leaps and repetitions which I think would occur in a more rigidly realistic transcription of mainstream consciousness.

Does it work, on the whole? I don’t think it delivers what was promised. In 1922 Joyce said:

“In Ulysses I have recorded, simultaneously, what a man says, sees, thinks and what such seeing, thinking, saying does to what you Freudians call the unconscious…”

which I think pretty much amounts to a claim to have delivered all the contents of consciousness sketched out above. I don’t think Joyce really did this, or if he did, it was by the crafty use of traditional novelistic skills within a modernist presentation. But that doesn’t mean Ulysses isn’t interesting and a considerable achievement.

I think the right way to see it is in the context of a general resort to unconventional means in the arts which took place in the nineteenth and twentieth centuries. By the middle of the nineteenth century or thereabouts, the technical problems of painting, music and writing had pretty much been solved to the extent that they ever could be, and optimum conventions and methods had been established. Our culture being what it is, it was impossible for creative people to sit back and spend a few centuries exploiting these achievements: they had to press on and deliver something new. Since there were no more problems to solve, this novelty had to come from approaches that were radically different rather than improvements on what had gone before: various forms of unrealism and abstraction, twelve-tone or atonal music – and the stream of consciousness in the novel. These novel approaches tended not to lead on to anything – Joyce has no real successors – nor to achieve great popularity, but they did at least open up new imaginative and creative possibilities.

So far as consciousness is concerned, we can perhaps see Joyce’s work as a bold attempt to offer an analysis of consciousness as a single, transcribable phenomenon; in my eyes it proves the opposite: that consciousness is inherently complex, and no single simple theory can capture the whole thing.

Benjamin Libet

Picture: Libet. I’ve just heard that Benjamin Libet died at the end of July.

His famous experiments remain surprising and controversial, and in fact I remember discounting them altogether when I first heard about them. It must have been in about 1985, I think: we were sitting on rickety chairs in the trademark squalor of Gordon’s Wine Bar, and I was giving the company the benefit of what I supposed were my more sophisticated views on the subject of free will.

“That’s all very well,” said a young scientist, “But there’s this bloke who’s proved experimentally that free will does not exist… Libet, I think. I was reading about it last week.”

“That sounds interesting,” I said, “But I don’t think that’s possible in principle. Freedom is not an observable physical property, so the issue is beyond the reach of empirical methods. Perhaps you’re thinking that free will requires some kind of causal discontinuity, but that isn’t actually the case.”

“Well, these experiments show that the decision to act is made before we become aware of it. That proves our conscious thoughts about the decision don’t determine which way it goes.”

“I don’t really see how you could determine experimentally when a decision is made, other than by asking the experimental subject,” I said, “It’s not as if you can read off the contents of someone’s mind from an encephalogram.”

“I’ll get you the reference,” he said – alas, he never did, and it was some years before I got round to mitigating my ignorance of Libet’s experiments. Libet himself seems to have been a thoughtful, complex man, very far from the gung-ho debunker I had at first envisaged.

Whatever the fate of Libet’s theory about a conscious mental field, his experiments are surely classics in the field and guarantee him a permanent place of honour in its history.

Teacups and mirrors

Picture: teacups. Mirror neurons have been widely described as a crucial discovery and possibly ‘the next big thing’ (I’m not sure, when I come to think about it, what the last big thing was). Ramachandran describes them as ‘empathy neurons’ or even ‘Dalai Lama neurons’, and others have been almost equally enthusiastic. But are they really so good? The trenchant title of a short paper by Emma Borg asks ‘If mirror neurons are the answer, what was the question?’.

Your mirror neurons fire both when you perform an action, and when you see someone else perform that action. Borg contrasts them with ‘canonical neurons’, which fire in response to an object offering the right kind of affordances. In other words, if I’ve got it right, we have a large group of neurons that fire when we, for example, take a sip of tea: some of them are mirror neurons which also fire when we see someone else drink; others are canonical neurons which also fire when we see a cup or a teapot – ‘tea-drinking things’.

At a basic level, the argument that mirror neurons might help to explain empathy, or our understanding of other people, is clear enough. When I see A do x, the mirror neurons mean my mental activity has at least some limited features in common with A’s (presumed) mental activity, or at least what A’s mental activity would be if A were me. You can see why this resembles telepathy of a sort, and it seems a natural hypothesis that it might form the basis of our understanding of other people. One of the many theories on offer to explain autism, in fact, holds that it is caused by a deficiency in mirror neuron activity. Apparently there is evidence to show that autistic people don’t show the same kinds of activity in the relevant regions as normal people when they observe other people’s behaviour. It could be that the absence of mirror neuron activity has left them with no basis for a ‘theory of mind’: of course it could also be that the absence of an effective theory of mind, caused by something else altogether, is somehow suppressing the activity of their mirror neurons.

Borg’s target is the idea that mirror neurons in themselves give us the ability to attribute high-level intentions to other people, by running simulated intentions of our own that match the observed actions of the other person. The initial idea is roughly that when we see someone lift a cup, some of our neurons start doing that tea-cup lifting thing in sympathy (off-line in some way, of course, or we should grab a cup ourselves). This is like harbouring the intention of lifting the cup, but we are able to attribute the intention to the other person. However, this only gets us as far as the deliberate lifting of the cup: it has been further claimed that mirror neurons give us the ability to deduce the over-arching intention – drinking a cup of tea. The claim is that mirror neurons not only resonate with the current action but also more faintly (or rather, in smaller numbers) with the next likely action, and this provides a guide to the higher-level activity of which the single act is part.

Borg points out that actions in themselves are highly ambiguous. I may lift a cup to test its weight, or to stop you getting it, rather than in order to drink from it. It’s certainly not the case that every basic act dictates its successors, or we should be trapped in a cycle of stereotyped behaviour. When we run our mental simulation then, how can we know which secondary echoes we need to start off in our mirror neurons – unless we already know which higher-level course of action we are dealing with? In short, mirror neurons are not enough unless we already have a working theory of mind from some other source.
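The arithmetic of Borg’s point can be put in miniature Bayesian terms. The sketch below is entirely my own construction, not anything in her paper – the numbers are invented and the ‘intentions’ are just labels – but it shows where the burden falls: when the observed act is about equally likely under each candidate intention, the posterior over intentions is fixed almost entirely by the prior, and a prior over intentions is precisely the working theory of mind that mirror neuron resonance alone does not supply.

```python
# Assumed values for P(act = "lift cup" | intention): the same basic act
# is highly likely under every candidate intention, as Borg observes.
likelihood = {
    "drink the tea": 0.9,
    "test its weight": 0.9,
    "keep it from you": 0.9,
}

def posterior(prior):
    """Bayes' rule: P(intention | act) is proportional to prior * likelihood."""
    unnorm = {i: prior[i] * likelihood[i] for i in likelihood}
    total = sum(unnorm.values())
    return {i: round(p / total, 2) for i, p in unnorm.items()}

flat = {i: 1 / 3 for i in likelihood}
print(posterior(flat))  # all 0.33: the act alone settles nothing

informed = {"drink the tea": 0.8, "test its weight": 0.1, "keep it from you": 0.1}
print(posterior(informed))  # disambiguation comes from the prior, not the act
```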

We might argue that we don’t need to know the intention in advance, because the simulation allows us to test out several different higher-level courses of action at once. But again, the mere observation of the single act before us won’t allow us to choose between them. In the end we’ll always be driven back to appealing to something more than mere mirror neuron activity. None of this suggests that mirror neurons are uninteresting, but perhaps they are not, after all, going to be our Rosetta Stone in deciphering the brain.

Borg describes her argument as anti-behaviourist, resisting the idea that intentions and other ‘mentalistic’ states can be reduced to simple patterns of activity. Fair enough, but given that behaviourism doesn’t put up much of a fight these days, it may be more interesting that it bears a distinct resemblance – or so it seems to me – to many other problems which have afflicted attempts to reduce or naturalise intentionality, up to and including the frame problem. It’s as though we were trying to find a way through an impenetrable hedge: every so often someone finds a promising looking thin patch and starts to shove through; but sooner or later they meet one or another stretch of suspiciously-similar looking brick wall.

Axioms of consciousness

Picture: Kernel Architecture.

Picture: Bitbucket. I’ve been meaning to mention the paper in Issue 7 of the current volume of the JCS which had about the most exciting abstract that I remember seeing. The paper is “Why Axiomatic Models of Being Conscious?”. Abstracts of academic papers don’t tend to be all that thrillingly written, but this one, by Igor Aleksander and Helen Morton, proposes to break consciousness down into five components and offers a ‘kernel architecture’ to put them back together again. It offers ‘an appropriate way of doing science on a first-person phenomenon’.

The tone was quite matter-of-fact, of course, but merely analysing consciousness to this extent seems remarkable to me. The mere phrase ‘axioms of consciousness’ has the same kind of exotic ring as ‘the philosopher’s stone’ in my mind: never mind proposing an overall architecture. Is this a breakthrough at last? Even if it’s not right, it must be interesting.

Picture: Blandula. Yes, my eyebrows practically flew off the top of my forehead at some of the implied claims in that abstract. But not altogether in a good way. Perhaps we really are in the same realm of myth and confusion as the philosopher’s stone. Any excitement was undercut by the certainty that the paper would prove to have missed the point or otherwise failed to deliver; we’ve been here so many times before. Moreover, you know, even those axioms are not exactly a new discovery – Aleksander, with a different collaborator, first floated them in 2003.

Picture: Bitbucket. OK, so maybe I missed them at the time, but surely we ought to look at them with an open mind rather than assuming failure is a certainty? The five are stated in the first person, in accordance with the introspective basis of the paper.

  1. Presence: I feel that I am an entity in the world that is outside of me
  2. Imagination: I can recall previous sensory experience as a more or less degraded version of that experience. Driven by language, I can imagine experiences I never had.
  3. Attention: I am selectively conscious of the world outside of me and can select sensory events I wish to imagine.
  4. Volition: I can imagine the results of taking actions and select an action I wish to take.
  5. Emotion: I evaluate events and the expected results of actions according to criteria usually called emotions.

That seems an interesting and pretty comprehensive list, though clearly it’s impossible to adopt a bold approach like this without raising a lot of different issues.
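Just to make the shape of the proposal concrete, here is a bare skeleton of how five such components might be wired together. This is my own doodle, I should stress, not the kernel architecture of the paper, which ties the components to specific neural state machines; but it shows that the axioms decompose into distinct, individually testable pieces rather than one undifferentiated mystery.

```python
from dataclasses import dataclass, field

@dataclass
class Kernel:
    """Toy skeleton of the five-axiom decomposition (illustrative only)."""
    percepts: list = field(default_factory=list)  # 1. Presence: a depiction of the world
    memory: list = field(default_factory=list)    # 2. Imagination: recallable traces

    def perceive(self, event):
        self.percepts.append(event)  # the world enters the depiction...
        self.memory.append(event)    # ...and leaves a trace for later recall

    def attend(self, interest):      # 3. Attention: select among percepts
        return [p for p in self.percepts if interest(p)]

    def imagine(self, n=3):          # 2. Imagination: replay recent traces
        return self.memory[-n:]

    def choose(self, actions, value):
        # 4. Volition + 5. Emotion: imagine the outcome of each action
        # and pick the one the evaluative criteria rank highest
        return max(actions, key=lambda a: value(a, self.imagine()))

k = Kernel()
k.perceive("cup on table")
k.perceive("kettle boiling")
print(k.attend(lambda p: "cup" in p))  # ['cup on table']
print(k.choose(["pour", "wait"], lambda a, ctx: 1.0 if a == "pour" else 0.2))
```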

Picture: Blandula. You can say that again. Just look at number 5, for example. Apparently emotions are criteria? My criteria were so strong I just burst into tears? The sight of my true love’s face filled my heart with an inexpressibly deep criterion? And their role is to help evaluate events and the result of actions? I mean, apart from the fact that emotions generally interfere with our evaluation of events and actions, that just isn’t the essence of emotion at all. I can sit here listening to Bach and be swept along by a whole range of profound emotions which I can hardly even put a name to – I feel exalted, energised, but that doesn’t really cover it – without any connection to events or possible actions whatever.

If that isn’t enough, Aleksander and Morton claim to be developing an introspective analysis, rather than a functional one. But emotions, it turns out, are basically there to condition our actions. So that’s an introspective definition?

Picture: Bitbucket. Not the point. We’re not trying to capture the ineffable innerness of things here, we’re trying to set up a scientific framework; and from the vantage point of that framework – guess what? It turns out the ineffable innerness is entirely negligible and adds nothing to our scientific understanding.

I said in the first place there are two parts to the enterprise: the analysis and then the construction of the architecture. The analysis starts from an introspective view, but it would be absurd to think an architecture could have no regard to function. If that pollutes the non-functionalist purity of the approach from a philosophical point of view, who cares? We’re not interested in which scholastic labels to apply to the theory, we’re interested in whether it’s true.

Picture: Blandula. Well, you say that, but Aleksander and Morton seem to want to draw some philosophical conclusions – and so do you. They reckon their analysis shows that there is no real ‘hard problem’ of consciousness. I’m not convinced. For one thing, their attack seems to be quite tightly tied to the Chalmersian version of the hard problem. If subjective states can’t be rigorously related to physical states, they say, it opens the way to zombies, people who are physically like us but have no inner life. That would be Chalmers’ view, maybe, and I grant that it has attained something close to canonical status. But it’s quite possible to disbelieve in the possibility of zombies and still find the relation between the physical and the subjective profoundly problematic.

Much more fundamental than that, though, their whole approach seems to me yet another instance of a phenomenon we might call ‘scientist’s slide’. Virtually all the terms in this field can be given two values. We can talk about a robot’s ‘actions’, meaning just the way it moves, or we can talk about ‘actions’, meaning the freely-willed deliberate behaviour of a conscious agent. We can talk about our PC ‘thinking’ in an inoffensive sense that just means computational processing, or we can talk about ‘thinking’ in the subjective, conscious sense that no-one would attribute to an ordinary computer on their desk. To put it more technically, the same words in ordinary language can often be used to refer either to access consciousness, a-consciousness, the ‘easy problem’ kind, or to phenomenal consciousness, p-consciousness, the ‘hard problem’ kind.

Now over and over again scientists in this area have fallen into the trap of announcing that their theory explains ‘real’ p-consciousness, but sliding into explaining a-consciousness without even noticing. Not that the explanation of a-consciousness is trivial or uninteresting: but it ain’t the hard problem. I suspect it happens in part because people with a strong scientific background have to overcome a lot of ingrained empiricist mental habits just to grasp the idea of p-consciousness at all: they can do it, but when they get involved in developing a theory and their attention is divided, it just slips away from them again. Not meaning to be rude about scientists: it’s the same with philosophers when they just about manage to get their heads round quantum physics, but in discussion it transmutes into magic pixie dust.

Picture: Bitbucket. No, no: you’re mistaking a deliberate assertion for a confusion. It’s not that Aleksander and Morton don’t grasp the concept of p-consciousness: they understand it perfectly, but they challenge its utility.

Let me challenge you on this. Take a case where we can’t shelter behind philosophical obfuscation, where we have to make a practical decision. Suppose we talk about the morality of killing or hurting animals. Now I believe you would say that we should not harm animals because they feel pain in somewhat the same way we do. But how do we know? Philosophically you can’t have any certainty that other humans feel pain, let alone animals.

In practice, I submit, you base your decision on knowledge of the nervous system of the creature in question. We know human beings have brains and nerves just like ours, so we attribute similar feelings to them. Mammals and other creatures with large brains are also assumed to have at least some feelings. By the time we get down to ants, we don’t really care: ants are capable of very complex behaviour, maybe, but they don’t have very big brains, so we don’t worry about their feelings. Plants are living things, but have no nervous system at all, and so we care no more about them than about inanimate objects.

Now I don’t think you can deny any of that – so if we stick to practical reasoning, what are we going to look at when we decide the criteria for ‘proper’ consciousness? It has to be the architecture of the brain and its processes. That’s where the paper is aiming: rational criteria for deciding such issues as whether an animal is conscious or whether it has higher order thought. If we can get this tied down properly to the relevant neurology, it may, you know, be possible to bypass the philosophy altogether.

Lost in the Woods

Picture: David Gelernter. David Gelernter says that the project of strong artificial intelligence – creating a real, human-style consciousness – is lost in the woods, and that it is highly unlikely, only just short of impossible, that a real mind will ever be put together out of software. Although he believes strongly in the value of less ambitious AI projects, Gelernter has some form as a sceptic, having debated the subject last year with Ray Kurzweil, whose faith in the future of technology is of course legendary.

Though he mentions Jerry Fodor with approval, Gelernter appears to be in the main a loyal follower of John Searle so far as philosophy is concerned, quoting the famous Chinese Room thought-experiment and repeating Searle’s line that a computer simulation of rain doesn’t make you wet. This is very well-trodden ground, where Searle’s allies and enemies long ago reached a kind of impasse, and new insights are unlikely to be found: moreover, with that ‘almost’ impossible, Gelernter actually pulls the punch in a way that Searle would certainly never do.

Gelernter suggests that AI has looked towards digital computation for two main reasons; first digital computers seemed in some ways like brains, and second, computation is the leading technology of our day, naturally seen as the best candidate for any theoretically daunting challenge. I think this understates the case a bit. Historically, as far back as Babbage and Turing, the creators of digital computers didn’t just design their machines and then notice a resemblance to human brains; they set out to make machines that did what brains do. It’s not unreasonable that machines designed in imitation of brains should be what we turn to when we want an artificial mind. Moreover, it’s not just that digital computers are the cutting edge of technology at the moment; they have an open-ended flexibility – they are universal machines after all – which is far more brain-like than any other artefact we can imagine. So while it may be right or wrong for AI practitioners to look towards digital computation, there are pretty good reasons for them to do so.

Gelernter’s main proposition, in any case, concerns what he calls the cognitive continuum. Consciousness is not just on or off, he observes: it may be closely focussed on a specific object, or it may be in a freer, looser state, in which the mind is more prone to wander. These looser states are not just the brain working inefficiently: they exhibit the mind’s ability to associate freely, think emotively, and come up with analogies. These kinds of thinking, especially analogical thinking, are crucial to the nature and success of human consciousness and they deserve more attention within AI. Gelernter himself describes these as only pre-theoretical ideas, but I think he’s certainly touched on an interesting area. There’s no doubt he’s right about consciousness having fuzzy edges, as it were. It would be great to have the range of possible different conscious states properly clarified, but I think that would involve more than a single spectrum – there are surely several different variables at work. To quote just a few examples, besides being focussed or diffuse, our thoughts may be explicit or implicit; they may operate in several different representational modes (thinking in pictures, in words, or neither, for example); they may be accompanied by second-order thoughts (awareness of our awareness) or not (though some would dispute that); they may be under our deliberate direction, roaming free, or directed by events in the world. They may even be operating in two or more different ways at once, as when we mentally plan a meeting while driving along a busy road.

Gelernter’s idea seems to be that when we are concentrating on something, our ideas are operating under close control and they do as they’re told: when our minds are wandering, the ideas can collide with each other more or less at random and produce fruitful results. I think he’s right to stress the importance of analogical thought, but I’m not altogether sure that it is uniquely enabled by a loose focus. Sometimes when we’re thinking hard about a specific problem we may cast about for ideas quite deliberately or look for telling analogies. The cases where a good idea comes into our mind when we’re thinking of something else, or of nothing, actually strike us as remarkable, and that may be why we remember and emphasise them more than the cases where a good analogy was put together by careful and conscious hard work.

But even if we suppose that a relaxed state of mind is conducive to analogising, that is certainly only part of the story, perhaps the less interesting part. Good analogies don’t come from the random combination of ideas – that would be a hopelessly inefficient way of generating useful thoughts – they embody threads of meaning. I suspect that what is going on has something to do with ideas lurking in the mind at a subliminal level – perhaps the relevant neurons are firing too slowly for the thought to be conscious, but fast enough to predispose the mind towards a particular thought. When a related, or latent, idea comes along, it is enough to tip a fully-formed new thought into consciousness. I’m getting a bit pre-theoretical here myself, but at least I think it’s clear that the old problem of relevance has something to do with this.

In fact I see an analogy (ha) with iconic representation.

One of the many proposed routes for constructing a theory of meaningfulness is that it starts with simple iconic representation and builds up from there. If you want someone to think of a man, show them a man, goes the theory: if you can’t arrange for a man to be present, do a drawing. The marks on the paper share certain properties of shape and appearance with a real man, and so they call the same thought to mind. Stylise your man a bit, and you have a symbol and before you know where you are you’ll be covering your pyramid with votive inscriptions. But it’s not as simple as that. If I paint a stick man on the wall, it might mean ‘man’, but it might mean ‘I was here’, or ‘painting’ (or of course, ‘gentlemen’s toilet’). For iconic representation to work, we have to pick out which of the many properties of the icon are relevant: humans are very good at this, but digital computers can’t really do it at all. In the same way, a good analogy links two things in different realms whose relevant properties are the same while their other properties are entirely different.

You would perhaps expect Gelernter to make this analogical thinking another basis for his scepticism about strong AI, but in fact he believes the process could and should be simulated by computer, giving us a new generation of vastly more useful artificial intelligence working to principles much more like those of the human brain. It would look a lot more like human consciousness but, he says, it would not be real – just a simulation. I think he is bound to run into new varieties of the old problems with relevance which in various shapes have dogged the progress of AI for many years, but it’s interesting to contemplate the strange situation which would arise if he succeeded. Hypothetically, Gelernter’s machine would be talking and behaving like a conscious human: many, I dare say, would be happy to accept that it had consciousness of some sort – but not Gelernter.