Philosophy and Neuroscience

Picture: Philosophy and Neuroscience. I’ve just caught up, a bit belatedly, with The Philosophy and Neuroscience Movement (pdf), the paper Pete Mandik wrote with Andrew Brook and featured on his blog at the beginning of the month. It’s an interesting read and seems to have garnered quite a bit of attention, including a discussion (rather foggy, I thought) on Metafilter.

The general question of relations between science and philosophy has of course been a vexed one ever since the distinction started to be made (I don’t think someone like Isaac Newton would have seen any particular gulf between the two). Some scientists speak of philosophers the way an aristocrat might talk about the gypsies encamped on the edge of his land: with patronising disapproval, but also with a slight secret fear that they might possess obscure magical secrets which could ruin the scientific harvest. Philosophers, for their part, have largely been scared away from the grand metaphysical uplands by the obvious dangers of ontologising without a firm grasp of modern physics.

When it comes to philosophy of mind and neuroscience, the intertwining of the issues creates an especially great need for co-operation and interaction, and perhaps holds out some prospect of more of a meeting on equal terms. Some thinking along these lines must, at any rate, have lain behind the movement for working together which Brook and Mandik say began about twenty-five years ago; they rightly point out that the influence of scientific advances has transformed the way we think about language, memory and vision. (In fact, scientific understanding of the visual system has been influencing philosophical ideas for hundreds of years: the discovery of images on the retina must surely have predisposed philosophers towards thinking in terms of an homunculus, a ‘little man’ sitting and watching the pictures, and perhaps still adds some weight to the perceived importance of mental images as key intermediaries between us and reality.)

But I think it’s still not uncommon for people on either side to assume that they can get on quite well on their own. Neuroscientists may be tempted to think they should just get on with the science and let the philosophy take care of itself: maybe the philosophical answers will come along as a kind of bonus with the scientific results – and if they don’t, well, philosophers never answer anything anyway, do they? Philosophers, equally, may suppose the science is a matter of detail – oh, by the way, they tell me that the firing of c-fibres is not actually equivalent to pain, as it turns out, but it doesn’t really matter if it’s, you know, f-fibres or z-fibres: for the sake of brevity in this discussion let’s just pretend it is c-fibres…

Hence in part, I suppose, the paper, which discusses (necessarily in brief and summary form) a number of interesting areas of interaction which show how much there is to be gained by positive engagement.

An interesting one is the suggestion that neuroscientists and philosophers of neuroscience have greatly strengthened the case against nomological (law-based) theories of scientific explanation. Physics tends to be taken as the paradigmatic science, but one of its leading characteristics is its amenability to nomological treatment: physics produces clear universal laws which work with precise mathematical accuracy. In biology, things are very different, and we tend to have to work with explanations which are teleological (concerned with the purpose of things) or statistical in nature. This appears to be especially true in neuroscience, where the brain exhibits complex structure but does not seem to operate under simple laws.

Another area I found thought-provoking was the claim, which surprised me at first, that new neuroscientific tools have led to a renaissance in introspective studies. Introspection, the direct inward examination of the contents of one’s own consciousness, was a no-go area for many years after the collapse of attempts to systematise introspectionist findings (indeed, we’ve been reading recently in the JCS about the disgust with introspectionism that led J.B. Watson to set up as a radical behaviourist, declaring that there actually were no damn contents of consciousness). The trouble with introspection has always been that the results are chaotic and unconfirmable: training designed to improve results raises the new danger of bias, of getting out of your subjects only what you trained into them. How could this situation possibly be reclaimed?

Yet it’s true, when you come to think of it, that many of the numerous studies with fMRI and other scanners in recent years have relied on introspective reports. The thing is, of course, that the scanners provide an avenue of confirmation and corroboration which Wundt and Titchener never had, and which legitimises introspective reports. If it weren’t already so securely fastened, this would be another nail in the coffin of radical behaviourism.

A third point among many which particularly struck me was the issue of neural semantics. Consideration of the brain and nervous system as information processors takes us into the philosophically intractable area of intentionality. The real difficulties (and the most interesting issues) here are well-known to philosophers but easily overlooked or undervalued by scientists. However, neurobiology offers influential examples of feature-detection mechanisms which might (or might not) point the way to a proper analysis of meaning. My personal view is that this is a particularly promising area for future development, where ancient mysteries might really be dispelled in part by future research.

The paper concludes with a rather downbeat look at consciousness itself. Faced with claims that consciousness is in part not neural, or even physical, many neuroscientists (and their ‘philosophical fellow-travellers’) ignore them or ‘throw science at it’, the authors say. They rightly consider this a risky approach, liable to lead to the familiar syndrome in which scientists explain a toy-town or simplified conception of consciousness, leaving the really tough problem breathing unacknowledged down their necks. This might be a slightly gloomy view, but it is impossible to disagree when the authors call for better rejoinders to such claims.

Anthropomorphism

Picture: Rupert and Trepur. I came across A defense of Anthropomorphism: comparing Coetzee and Gowdy by Onno Oerlemans the other day. The main drift of the paper is literary: it compares the realistic approach to dog sentiment in J.M. Coetzee’s Disgrace with the strongly anthropomorphic elephants in Barbara Gowdy’s The White Bone. But it begins with a wide-ranging survey of anthropomorphism, the attribution of human-like qualities to entities that don’t actually have them. It mentions that the origin of the term has to do with representing God as human (a downgrade, unlike other cases of anthropomorphism), notes Darwin’s willingness to attribute similar emotions to humans and animals, and summarises Derrida’s essay The Animal That Therefore I Am (More To Follow). The Derrida piece discusses the embarrassment a human being may feel about being naked in front of a pet cat (I didn’t know that the popular Ceiling Cat internet meme had such serious intellectual roots) and concludes that taking the consciousness of animals as seriously as our own threatens one of the fundamental distinctions installed in the foundations of our conception of the world.

That may be, but the attribution of human sentience to animals is rife in popular culture, especially when it comes to children. Some lists of banned books suggest that the Chinese government has cracked down on many apparently harmless children’s books; it turns out this is because at one time the Chinese decided to eliminate anthropomorphism from children’s literature, wiping out a large swathe of traditional and Western stories. I can’t help feeling a small degree of sympathy with this: a look at children’s television reveals so many characters who are either outright talking animals, or (even stranger) humanoids with animal heads, that you might well conclude there was some law against the depiction of human beings. It would surely seem odd to any aliens who might be watching that we were so obsessed with the fantasised doings of other species.

Or perhaps it wouldn’t seem strange at all, and they would merely make plans for a picture-book series about Hubert the Human, his body spherical with twelve tentacles just like a normal person, but his head displaying the strange features of Homo sapiens. It seems likely that our fondness for anthropomorphism has something to do with our marked tendency to see faces in random patterns: our brains are clearly set up with a strong prejudice towards recognising people, or sentience at least, even where it doesn’t really exist. Such a strong tendency must surely have evolved because of a high survival value – it seems plausible that erring on the side of caution when spotting potential enemies or predators, for example, might be a good strategy – and if that’s the case we might expect any aliens to have evolved a similar bias.

That bias is a problem for us when it comes to science, however. When considering animal behaviour, it seems natural, almost unavoidable, to assume that the same kind of feelings, intentions and plans are at work as those responsible for similar behaviour in humans. After all, humans are animals. It’s clear that other animals don’t make such complicated plans as we do; they don’t talk in the same way we do and don’t seem to have the same kinds of abstract thought. But some of them seem to have at least the beginnings or precursors of human-style consciousness.

Unfortunately, careful observation shows beyond doubt that some forms of animal behaviour which seemed purposeful are really just fantastically well-developed instincts. The sphex wasp seems to check its burrow out of forethought before dragging its victim inside; but if you move the victim slightly and make it start the pattern again, it will check the burrow again, and go on doing so tirelessly over and over, in spite of the fact that it knows, or should know, that nothing could possibly have gone into the burrow since the last check.

A parsimonious approach seems called for. The methodologically correct principle to apply was eventually crystallised in Morgan’s Canon:

‘In no case may we interpret an action as the outcome of the exercise of a higher mental faculty, if it can be interpreted as the exercise of one which stands lower in the psychological scale’

In effect, this principle sets up a strong barrier against anthropomorphism: we may only attribute human-style conscious thought to an animal if nothing else – no combination of instinct, training, environment and luck – can possibly account for its behaviour. I said this was ‘methodologically correct’, but in fact it is a very strong restriction, and it could be argued that if it were rigorously applied, the attribution of human-style cognition to certain humans might begin to look doubtful. According to Oerlemans, ethologists have been asking whether, by striving too hard to avoid anthropomorphism, we haven’t sometimes denied ourselves legitimate and valuable insights.

It’s interesting to reflect that in another part of the forest we are faced with a similar difficulty and have adopted different principles. Besides the question of when to attribute intelligence to animals, we have the question of when to do so for machines. The nearest thing we have to Morgan’s Canon here is the Turing Test, which says that if something seems like a conscious intelligence after ten minutes or so of conversation, we might as well assume that that’s what it is. Now as it happens, because of its linguistic bias, the Turing Test would not admit any animal species to the human level of consciousness; but it does seem to be a less demanding criterion. Perhaps this is because of the differing history in the two fields: we’ve always been surrounded by animals whose behaviour was intelligent in some degree, and perhaps need to rein in our optimism; whereas there were few machines until the nineteenth century, and the conviction that they could in principle be intelligent in any sense took time to gain acceptance – so a more encouraging test seems right.

Perhaps, if some future genius comes up with the definitive test for consciousness, it will lie somewhere between Morgan and Turing, and be equally applicable to animals and machines?

Reflexive Monism

Picture: Cat diagram.

Max Velmans has produced Reflexive Monism, a valiant renewed effort to sort out the confused story of the relations between observer, object, and experience. This is one of those subjects that really ought to be perfectly straightforward but has in fact descended into such a dense thicket of philosophical clarification that the scope for misunderstanding and talking past one another is huge. In my more pessimistic moments I wonder whether the issue is really reclaimable at all, at least by means of further discussion.

Velmans’ view apparently stems from a revelation he experienced when he noticed that the world he had heretofore regarded as the public, objective, external one – the world we all experience, full of cats and a number of other things – was in fact a phenomenal world, a world as experienced by him. Those things out there are our experiences of the world, and so reflexive monism belongs with the externalist theories that seem to have become popular recently. It is also a dual aspect theory; that is, the one underlying stuff in which all monists must believe expresses itself in two ways, as the objective physical world and as the consciously experienced phenomenal world we actually see out there.

Dual aspect theories seem attractively sensible, and very probably true as far as they go; but I think one can be pardoned for still feeling slightly unsatisfied by them. Okay, so the world doesn’t consist of two kinds of stuff, it just has two aspects; but to round that out into a proper explanation we need an account of what an aspect might be. Ideally we also want an account of the nature of the one underlying stuff which would explain why on earth it expresses itself in two different ways. These accounts are not easy to give: in fact it is quite difficult to say anything at all about the fundamental stuff, much as all good metaphysicians must wish to do so. I don’t think these issues are quite so problematic for the poor benighted dualists, who have a good reason why things might appear in two different guises, or the more brutal kinds of monist, who can, as it were, just tick all the ‘No’ boxes on the form. Velmans, in fairness, has given us a helpful hint in the name of his theory: the reflexivity of his monism refers to the idea of the Universe becoming aware of itself through the medium of conscious individuals, which at least suggests where the two aspects might spring from.

Velmans brings out well what I think is the main attraction of externalism: that it eliminates the idea that the objects of perception are entirely in the head, that all we ever really experience are representations in the brain. Some of Velmans’ assaults on this idea, however, are a little dubious. Take his ‘skull’ argument: the phenomenal world extends as far as we can see – to the horizon and the dome of the sky, he suggests. Now if the phenomenal world is all inside my brain, my real, non-phenomenal skull exists beyond that world: beyond the dome of the sky. How ridiculous is that? Rhetorically, this conjures up the idea of the real skull floating in outer space, or perhaps enclosing the world like a second, bony sky. But really, in saying that the real skull is beyond the phenomenal world, we don’t mean it’s geographically or spatially a bit beyond it: we mean it’s in another world altogether; in a different mode of existence.

There’s a similarly questionable treatment of location in Velmans’ references to those materialists who would see experience as constituted by functions or patterns of neuron firing in the brain. Velmans again attributes to such people the belief that experience is actually located in the brain. They might well agree, but perhaps with some reservation: I think many or most functionalists, for example, would distinguish between a particular instantiation of a function, which certainly exists in a physical place in the brain, and the function itself, which could be run by other brains, and which exists in some Platonic realm where spatial position is irrelevant or meaningless.

The main problem for me, as with some other versions of externalism, is whether reflexive monism delivers the simplification it seems to promise or merely relocates the problem. Velmans provides diagrams which illustrate the difference between a dualist view, where the phenomenal perception floats above observer and object in a mental/spiritual world, a reductionist view where the percept is similarly dangling, and his own, where we have the object (a cat, as it happens), the observer, and nothing more than a couple of arrows. The trouble is, we still actually have the real cat and the perceived, phenomenal one: Velmans has pulled off a sly bit of legerdemain and shuffled one under the other.

Velmans spends some time expounding the idea of ‘perceptual projection’ – that the phenomenal world is projected out there into physical space – and defending himself against the charge of smearing real cats with phenomenal cat-perception stuff; but I think there is a worse difficulty. The phenomenal experience may have been projected out of our skulls, but it’s still all we get to deal with, and that seems to leave us dangerously isolated, close to the beginning of the broad and easy downward path which leads to solipsism. It’s not so much that the danger is inescapable – more that I’m left wondering whether taking all those phenomenal experiences out into the external world actually changed things all that much.

Velmans wraps up with an exposition of how his view impinges on the hard problem. In essence, he thinks that when we’ve grasped reflexive monism properly, we will see that the world’s having two different aspects is just one of those features which, although slightly mysterious, we need not worry about. We don’t agonise, he suggests, over the “hard problems” of physics – why does an electric current in a wire give rise to a magnetic field? Why do electrons behave like waves in some circumstances and like particles in others? Why is there any matter in the Universe at all?

Actually, I think people do agonise over those problems, as it happens. I must have spent a considerable number of shortish and frustrating periods of time wondering in vain why there was anything.

Picture: Lehar argument. Steve Lehar, who has carried on a long dialectic with Max Velmans, kindly wrote to express sympathy for some of the points above. There is a charming exposition of his views in cartoon form here.

Gestalt Isomorphism and the Primacy of Subjective Conscious Experience gives a more formal version, with a response from Velmans here and more here.