Archive for January, 2012

Eric Schwitzgebel has done a TEDx talk setting out his doctrine of crazyism. The audio is not top quality, but if you prefer there is a full exposition and a “short, folksy version” available here.

The claim, briefly, is that any successful theory of consciousness – in fact, any metaphysical theory – will have to include some core elements that are clearly crazy. “Crazy” is taken to describe any thesis which conflicts with common sense, and which we have no strong epistemic reason for accepting.

Schwitzgebel defends this thesis mainly by surveying the range of options available in philosophy of mind and pointing out that all of them – even those which set out to be pragmatic or commonsensical – imply propositions which are demonstrably crazy. I think he’s right about this; I’ve observed myself in the past that for any given theory of consciousness the strongest arguments are always the ones against: unnervingly this somehow remains true even when one theory is essentially just the negation of another theory. Schwitzgebel suggests we should never think that our own preferred view is, on balance, more likely than the combination of the alternatives – we should always give less than 50% credence to our preferred view or, if you like, never quite believe anything.

I won’t recapitulate Schwitzgebel’s case here, but it did provoke me to wonder about the issue of what we would find acceptable as an answer to the problem of consciousness. It’s certainly true that some theories would not, as it were, be crazy enough to appeal. Suppose the Blue Brain project triumphed, and delivered a brain simulated down to neuronal level. We could run the simulation and predict the brain’s behaviour; for anything the simulated person said or thought, we could give a complete neuronal specification, and in that sense a complete explanation. But it wouldn’t seem that that really answered any of the deeper questions.

Equally, for all those theories that tell us there’s really nothing to explain, that our consciousness and our selfhood are just delusions generated by aspects of the mental mechanism, one problem is that the answer seems too easy (though in another sense these views are surely crazy enough). We don’t want to be told to move along, nothing to see here, folks; what we want is an “Aha!” moment, a theory that makes things suddenly fall into place and make dramatic new sense. How do we get such moments?

I think they come from a translation or bridge that lets us see how one understood realm transfers across into another realm which is also understood but not connected. Maybe we find out that Hesperus is Phosphorus and that both are in fact the planet Venus; then the strange behaviour of the evening star and the morning star suddenly makes more sense. Another related way of generating the Aha! is to discover that we have been conceptualising things wrongly: that two things we thought were separate are really aspects of the same thing, or that a thing we took to be a single phenomenon is actually two different things we have conflated: temperature and heat, for example.

It certainly looks as if consciousness is ripe for an Aha! of those kinds – we have the two separate realms, the mental and the physical, all ready to go. But Colin McGinn has argued that the very distinctness of the realms means that no explanation can ever be forthcoming, and many people since Brentano have shared the sense that no bridging or reshuffling of concepts is even conceivable. The thing is, we don’t get the kind of paradigm shift we need by labouring away within the existing framework: we need something to jolt us out of it, and there’s no telling what that something might be. We know now that in order to come up with the theory that speciation occurs through differential survival of the fittest, you needed to visit the tropics and collect a lot of examples of local fauna, read Malthus and then fall ill. Darwin and Wallace both had their dogmatic slumbers shaken up in this way; but it was not evident in advance that that was what it took. Perhaps even now a young doctor who has treated schizophrenics has had the required motorbike accident and is just about to read the text on encryption which is an essential precursor to the Theory.

I sort of hope and believe that something like that is the case, and that when the Theory is available we shall see that one of those theses that look crazy now is not quite what we thought: so crazyism will turn out to be false, or at any rate only provisional. But Schwitzgebel’s essential pessimism could turn out to be justified. We could end up with a theory like quantum mechanics, which seems to do the job so far as anyone can see, but which just refuses to click with our brains positively enough for the Aha! moment.

Schwitzgebel doesn’t spend much time on the wider claim that metaphysics as a whole is crazy, but it’s an interesting possibility that the problem doesn’t really lie with philosophy of mind but something altogether deeper. Maybe we need to look away from the mind as such and spend some time on… what? Causality? Basic ontology? Once again, I have no idea.

If you haven’t already seen it, it’s well worth watching this charmingly animated talk by Iain McGilchrist on the two hemispheres of the brain. At a brisk pace he explains how people in the past went overboard with a false and over-simplified version of what the two hemispheres do, and provides a fascinating corrective. You may feel that towards the end he goes a teensy bit overboard himself in a new direction.

Also from TED is this talk in which Antonio Damasio goes in pursuit of the self – in his view an essential component of consciousness – and unexpectedly tracks it down to the brain stem.

Finally, Sergey Bulanov has kindly drawn my attention to his new website devoted to his work on developing a non-computational artificial intelligence. Sergey was originally inspired by a book of logic problems: he invented a network system for solving them and in a second phase is seeking to generalise his approach.

Thanks to Sergey and to Jesús Olmo and Ivan Savov respectively for the other links.

Researchers from the Max Planck Institute and the University of St Andrews have come up with some fresh evidence that chimps have a theory of mind (ToM) – that is to say, they are aware that other individuals possess knowledge, and that what others know doesn’t always match what they themselves know.

The researchers placed dummy snakes in the path of wild chimps: the chimps gave warning calls more frequently in the presence of others who, so far as they could tell, had no prior knowledge of the presumed hazard.

This kind of research is fraught with difficulty. Morgan’s Canon tells us that we should only use consciousness as an explanation for some item of behaviour where no simpler explanation is available, and similarly we should be reluctant to grant chimps ToM unless there is no alternative. Couldn’t the explanation be, for example, that chimps who are alone are more likely to give warning calls, either because that response is just hard-wired, or because they are more fearful when alone? Alternatively, perhaps the observed behaviour could be largely explained if chimps are programmed to give a warning call, but only one, for each member of the troop they spot or hear approaching?

Although I think Morgan’s Canon is absolutely the right kind of principle to apply, it is difficult to satisfy, and if read too literally perhaps impossible. We know from all the discussions of philosophical zombies that there are plenty of thoughtful people who find it conceivable that all of human behaviour could be produced without consciousness (at any rate, without the kind of consciousness that requires actual phenomenal subjective experience). If that’s really true then there are surely no cases in which behaviour can strictly be explained only by consciousness. It’s equally hard, going on impossible, to rule out every conceivable alternative explanation for the chimps’ behaviour – but the researchers were well aware of the problem, and the key point of the research is the observation of circumstances where, for example, chimp A could be presumed to have heard an earlier warning, but chimp B could not. So we can take their claims as well grounded: with some inevitable margin of doubt, we can reasonably take it as established that chimps do have ToM.

So what? We might have been willing to assume that that was probably the case anyway. We already know chimps are extremely bright, and there are many who believe they can develop language skills which approach human levels. Language is what makes it so much easier to know for sure that human beings have ToM – they can tell us about it – so if chimps are anywhere near that level it’s really no surprise that they also have ToM. (Interesting, by the way, that the current research uses the chimps’ proto-linguistic warning calls.) One further conclusion offered by the researchers themselves is that ToM must have emerged in the primate lineage at a point before the divergence of chimp and human ancestors: but that ain’t necessarily so. It could equally be that each lineage has developed a functionally comparable capacity in parallel, one which the last shared ancestor need never have had.

Do we and our pongid cousins have the same ToM? In some respects obviously not. For one thing, we humans really do have actual academic theories of mind; and we write novels filled with the putative contents of minds that never existed. We have ToM on levels which completely transcend the mental lives of chimps. Are these, though, just fancy overlays on an underlying ability which remains essentially the same?  Alas, there’s no easy way of telling without knowing what’s going on in the chimp’s mind – what it is like to be a chimp – and Nagel long ago told us that that was impossible.

Attempting to know the unknowable is nothing new for us, though, so let’s at least briefly try to achieve the impossible. There are lots of possibilities for what might be passing through the chimp’s mind: by way of illustration it could be any of the following.

  1. A cloudy sense of something indefinable but importantly snake-related which is missing in Chimp B.
  2. A mental picture of Chimp B continuing to advance and stumbling on the snake.
  3. A brief empathetic sense of being Chimp B, and a recollection that seeing the snake or hearing a warning has not occurred.
  4. Routine enumeration of the troop and its whereabouts leading to a realisation that Chimp B hasn’t been around for a while.
  5. Occurrence of proto-verbal content equivalent to uttering the sentence “Look, there’s B, who doesn’t know about the snake yet!”

There are plenty of other possibilities: cataloguing them would in itself be a challenging task. Moreover, humans are clearly capable of operating on two or more of these levels at once, and it would be mere speciesism to assume that chimps are not. Still, can we pare it down a bit: given that chimps lack full-blown human linguistic abilities and are relatively limited in their foresight, can we plausibly hypothesise that cases like 5, and others involving relatively complex levels of abstraction, are probably absent from the chimp experience? I’m not sure, and even if we can it doesn’t help all that much.

So instead I ask myself what state obtained in my own mind last time I warned someone about a potential hazard. Luckily I do remember a couple of occasions, but interestingly introspection leaves me quite uncertain about the answer. This could be a result of hazy memory, but I think it’s worse than that: I think the main problem is that so far as conscious thought goes I could have been thinking anything. It feels as if there is no distinct single state of mind which corresponds to noticing that somebody needs to be warned about something; curiously I feel tempted to examine my own behaviour and conclude that if I did go on to warn someone, I must have been thinking that they needed warning.

That kind of approach is another option, I suppose: we can take a behaviourist tack and say that if chimps behave in a way that displays ToM, then they have it, and that’s all there is to be said about it. If we can’t formulate clearly what kind of behaviour that would be, that just means ToM itself turns out to be mentalistic nonsense.  The snag with that is that ToM is pretty certainly mentalistic nonsense to behaviourists anyway; so if we think the question is worth answering we have to look elsewhere.

We could get neuronal on this: we might, for example, be able to scan human and chimp brains and detect some distinctive patterns of activity which occur just when the relevant primate appears to be getting ready to issue a warning. If these patterns of activity occurred in the corresponding sections of the chimp and human brain (perhaps involving some of those special mirror neurons) we should be inclined to conclude that our ToMs were basically the same: if they occurred in different places we should be very tempted to conclude that evolution had recruited different sections of the two species’ brains to carry out the same function. This latter case is quite plausible – in human brains, for example, the areas used for speech don’t match the bits of the chimp brain used for vocalisations (which apparently correspond to areas used by humans only for involuntary gasps and cries and, strangely enough, for swearing).

Results like that might settle the evolutionary question; but not the deeper philosophical one. Even if we did use a different set of neurons, it wouldn’t prove we weren’t running the same ToM. Different human beings certainly use somewhat different arrays of neurons – no two brains are wired identically. If we came across the yeti and found he was fully up to human levels of consciousness, able to hold an impeccably normal human-style conversation with us and discuss ToM just as we do, and then we made the astonishing discovery that he had no prefrontal cortex and was using what in humans would have been his cerebellum to do his conscious thinking with, we would not on that account alone say he had a different kind of consciousness (at least, I don’t think we would).

So it looks to me as if we have a radical pattern of variation at both ends. All sorts of neuronal wiring (or maybe silicon or beer cans and string – why not?) will do at the bottom level; all sorts of cogitative content will do at the top levels. Somewhere in the middle is there a level of description where deciding that someone needs to be warned is just that and nothing else, and where we can meaningfully compare and contrast human and chimp?  I suspect there is, but I also suspect that it resides in something analogous to a high-level mental metacode of a kind we should need a proper theory of mind even to begin imagining.