Archive for November, 2010

Picture: Octopus. Peter Godfrey-Smith is a philosophy professor who has spent some time observing octopus behaviour, so it’s only natural that he should start to wonder about octopus minds. The Harvard Gazette reports some of his speculations: perhaps animal minds lack the cohesiveness of their human equivalents. Perhaps in an octopus, going a little further, we see intelligence without a unified self.

Why would we think that?  The octopus brain is relatively large, but it is organised in a way utterly unlike ours: in particular it has large ganglia in its arms which together contain more neurons than the central group which we naturally speak of as the brain.  There’s some evidence that an octopus can be in two (or nine) minds about what to do, with some arms ‘wanting’ to hide while others ‘want’ to venture out after food. Perhaps its inner experience is akin to the inner experience of being a committee – whatever that’s like?

It would of course be rash to draw any conclusions on the basis of physiology alone. Does the location of neurons matter that much? If we surgically altered an octopus so that the outlying ganglia were adjacent to the central brain, without changing the layout of the neural connections, would that make any difference to the way it thought? If we did some similar surgery on a human being and split off some bits of the cortex while stretching the neurons and keeping the connections intact, would that suddenly give them committee consciousness? It seems unlikely.

In one obvious way the human brain is actually more divided than that of the octopus; ours is split in two down the middle, while theirs is centrally united (a glance at the layout of an octopus brain shows what a radically different design it follows). Does this give us dual consciousness? Some might say it did up to a point; talk of left- and right-brain thinking has become quite popular, if not always well grounded in science. Famously, moreover, when the link between the hemispheres of the human brain is cut, some divided behaviour can be evoked: a left hand able to indicate what ‘its’ eye can see while the right hand has no idea. But even when the connection has been cut, patients behave quite normally in ordinary life and do not report any sense of division: it takes very specific experimental circumstances to bring out specific peculiarities. This is obviously a good thing for the patients, and it’s also good for humans generally that the divided shape of the brain doesn’t lead to any equivocation in our normal responses.

That surely is something that evolution would guarantee: we think of the self as something abstract or even spiritual, but it rests on the solid fact of a single united organism. Any animal which was really ‘in two minds’ for very long over serious decisions would suffer the kind of disadvantage which would surely get it weeded out. That seems another reason to doubt whether the octopus sense of self can really be all that divided: it just wouldn’t be practical.

While we’re on practical, evolutionary considerations, we might ask ourselves if there’s some simpler reason for the octopus having ‘brains in its arms’. What a nervous system does is control and co-ordinate things, and for speed and economy it seems likely that the best design is always going to involve centralisation of the more complex neural operations. Nevertheless, even in humans not all operations are conducted centrally. Although in general it pays to route action decisions through the centre, there is a small price to be paid in terms of speed, and where short response times are crucial and the operation is relatively simple, it’s better to do it locally. This is why reflexes don’t trouble your brain: they’re wired up to happen automatically and instantly on the basis of local neurons. Could it be that octopus arms feature large ganglia for similar reasons of speed, and need larger collections of neurons simply because the task of orchestrating the movements of their tentacles is inherently more complex than the job of twitching a mechanically simple human arm? (It’s not just that controlling a tentacle is more complex than controlling a jointed arm: octopuses also have the ability to change the pattern of their skin rapidly, another task which surely uses up a significant amount of processing power.)

It has been shown that the ‘tentacle brains’ are indeed capable of operating basic tentacle behaviour without input from the brain, and it looks as though the octopus design delegates these control functions. In humans this kind of operation – controlling the sequence of muscle contractions required for you to walk along, for example – is just the sort of thing that drops out of consciousness; so it seems probable that the octopus’s tentacle brains, however large, have no role in any higher mental functions it may have. I’m afraid it isn’t all that likely that these higher functions are actually very advanced: though the octopus brain is large for an invertebrate, in proportion to its body it’s smaller than those of mammals or birds. At the risk of being rude, the remarkable proportions could be as much a matter of a small central brain as of large tentacle brains. Perhaps, we can speculate, if some future cephalopod did indeed attain human-level consciousness, it would turn out to have so large a central brain that the ganglia in its tentacles no longer seemed quite so remarkable.

Picture: Chalmers. The Conscious Mind was something of a blockbuster, as serious philosophical works go, so a big new book from David Chalmers is undoubtedly an event.  Anyone who might have been hoping for a recantation of his earlier views, or a radical new direction, will be disappointed – Chalmers himself says he is a little less enthusiastic about epiphenomenalism and a little more about a central place for intentionality, and that’s about it. The Character of Consciousness is partly a consolidation, bringing together pieces published separately over the last few years; but the restatement does also show how his views have developed, broadening into new areas while clarifying and reinforcing others.

What are those views? Chalmers begins by setting out again the Hard Problem (a term with which his name will forever be associated) of explaining phenomenal experience – why is it that ‘there is something it is like’ to experience colours, sound, anything? The key point is that experience is simply not amenable to the kind of reductive explanation which science has applied elsewhere; we’re not dealing with functions or capacities, so reduction can gain no traction. Chalmers notes – justly, I’m afraid – that many accounts which offer to explain the problem actually go on to consider one or other of the simpler problems instead (more contentiously he quotes the theories of Crick and Koch, and Bernard Baars, as examples). In this initial exposition Chalmers avoids quoting the picturesque thought experiments which are usually used, but the result is clear and readable; if you never read The Conscious Mind I think you could perhaps start here instead.

He is not, of course, content to leave subjective experience an insoluble mystery and offers a programme of investigation which (to drastically over-simplify) relies on some basic correspondences between the kind of awareness which is amenable to scientific investigation and the experience which isn’t. Getting at consciousness this way naturally tends to tell us about the aspects which relate to awareness rather than the inner nature of consciousness itself: on that, Chalmers tentatively offers the idea that it might be a second aspect of information (in roughly the sense defined by Claude Shannon).  I’m a little wary of information in this sense having a big metaphysical role – for what it’s worth I believe Shannon himself didn’t like his work being built on in this direction.
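For readers unfamiliar with Shannon’s quantitative sense of ‘information’, the idea is simply a measure of how much uncertainty a signal resolves, with no reference to meaning or experience. A minimal sketch of the standard formula (this is a textbook illustration of Shannon entropy, not anything drawn from Chalmers):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits.

    Takes a list of outcome probabilities; terms with p == 0
    contribute nothing, by convention."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin toss resolves exactly one bit of uncertainty.
print(shannon_entropy([0.5, 0.5]))   # 1.0

# A heavily biased coin carries less information per toss.
print(shannon_entropy([0.9, 0.1]))   # about 0.47

# A certain outcome carries none at all.
print(shannon_entropy([1.0]))        # 0.0
```

The point to note is how austere this notion is: it quantifies the statistical structure of a signal and nothing more, which is partly why giving it a second, phenomenal aspect strikes some (including, it seems, Shannon himself) as a stretch.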

The next few chapters, following up on the project of investigating ineffable consciousness through its effable counterparts, deal with the much-discussed search for the neural correlates of consciousness (NCC). It’s a careful and not over-optimistic account. While some simple correspondences between neural activity and specific one-off experiences have long been well evidenced, I’m pessimistic myself about the possibility of NCCs in any general, useful form. I doubt whether we would get all that much out of a search for the alphabetic correlates of narrative, though we know that the alphabet is in some sense all you need, and the case of neurons and consciousness is surely no easier. Chalmers rightly suggests we need principles of interpretation: but once we’ve stopped talking about a decoding and are talking about an interpretation instead, mightn’t the essential point have slipped through our fingers?

The next step takes us on to ontology. In Chalmers’ view, the epistemic gap, the fact that knowledge about the physics does not entail knowledge of the phenomenal, is a sign that there is a real, ontological gap, too. Materialism is not enough: phenomenal experience shows that there’s more. He now gives us a fuller account of the arguments in favour of qualia, the items of phenomenal experience, being a real problem for materialism, and categorises the positions typically taken (other views are of course possible).

  • Type A Materialism denies the epistemic gap: all this stuff about phenomenal experience is so much nonsense.
  • Type B Materialism accepts the epistemic gap, but thinks it can be dealt with within a materialist framework.
  • Type C Materialism sees the epistemic gap as a grave problem, but holds that in the limit, when we understand things better, we’ll understand how it can be reconciled with materialism.

In the other camp we have non-materialist views.

  • Type D Dualism puts phenomenal experience outside the physical world, but gives it the power to influence material things.
  • Type E Dualism, epiphenomenalism, also puts phenomenal experience outside the physical world, but denies that it can affect material things: it is a kind of passenger.

Finally we have the option that Chalmers appears to prefer:

  • Type F monism (not labelled as a materialism, you notice, though arguably it is). This is the view that consciousness is constituted by the intrinsic properties of physical entities: Chalmers suggests it might be called Russellian monism.

The point, as I understand it, is that we normally only deal with the external, ‘visible’ aspects of physical things: perhaps phenomenal experience is what they are intrinsically like in themselves – inside, as it were. I like this idea, though I suspect I come at it from the opposite direction: to Chalmers, it seems to mean something like ‘those experiences you’re having – well, they’re the kind of thing that constitutes reality’, whereas to me it’s more ‘you know reality – well, that’s what you’re actually experiencing’. Chalmers’ way of looking at it has the advantage of leaving him positioned to investigate consciousness by proxy, whereas I must admit that my point of view tends to leave me with no way into the question of what intrinsic reality is and makes mysterian scepticism (which I don’t like any more than Chalmers) look regrettably plausible.

Now Chalmers expounds the two-dimensional argument by which he sets considerable store. This is an argument intended to help us get  from an epistemic gap to an ontological one by invoking two-dimensional semantics and more sophisticated conceptions of possibility and conceivability.  It is as technical as that last sentence may have suggested. To illustrate its effects, Chalmers concentrates on the conceivability argument: this is basically the point often dramatised with zombies, namely that we can conceive of a world, or people, identical to the ones we’re used to in all physical respects but completely without phenomenal experience. This shows that there is something over and above the physical account, so materialism is false.  One rejoinder to this argument might be that the world is under no obligations to conform with our notions of what is conceivable; Chalmers, by distinguishing forms of conceivability and of possibility, and drawing out the relations between them, wants to say that in certain respects it is so obliged, so that either materialism is false or Russellian monism is true.  (Lack of space – and let’s be honest, brains – prevents me from giving a better account of the argument at the moment.)

Up to this point the book maintains a pretty good overall coherence, although Chalmers explicitly suggests that reading it straight through is only one approach and unlikely to be the best for most readers; from here on in it becomes more clearly an anthology of related pieces.

Chalmers gives us a new version of Mary the Colour Scientist (no constraint about the old favourites in this part of the book) in Inverted Mary. When original Mary sees a tomato for the first time she discovers that it causes the phenomenal experience of redness: when inverted Mary sees a tomato (we must assume that it is the same one, not a less ripe version) she discovers that it causes the phenomenal experience of greenness.  This and similar arguments have the alarming implication that the ineffability of qualia, of phenomenal experience, cannot be ring-fenced: it spills over at least into the intentionality of Mary’s knowledge and beliefs, and in fact evidently into a great deal of what we think, say and believe.  This looks worrying, but on reflection I’m not sure it’s such big news as it seems; it’s inherent in the whole problem of qualia that when we both look at a tomato I have no way of being sure that what you experience – and refer to – as red is the same as the thing I’m talking about. More comfortingly Chalmers goes on to defend a certain variety of infallibility for direct phenomenal beliefs.

Further chapters provide more evidence of Chalmers’ greater interest in intentionality: he reviews several forms of representationalism, the view that phenomenal experience has some intentional character (that is, it’s about or indicates something) and defends a narrow variety. He offers us a new version of the Garden of Eden, here pressed into service as a place where our experiences are direct and perfectly veridical. Chalmers uses the notion of Edenic content as a tool to break apart the constituents of experience; in fact, he seems eventually to convince himself that Edenic content is not only possible but fundamental, possibly the basis of perceptual experience. It’s an interesting idea.

Included here too is a nice piece on the metaphysics of the Matrix (the film, that is).  Chalmers entertainingly (and surely rightly) argues that the proposition that we are living in a matrix, a virtual reality world, is not sceptical, but metaphysical. It’s not, in fact, that we disbelieve in the world of the matrix, rather that we entertain some hypotheses about its ontological underpinnings. Even bits are things.

The book rounds things off with an attempt (co-authored with Tim Bayne) to sort out some of the issues surrounding the unity of consciousness, distinguishing access and phenomenal unity along the lines of Ned Block’s distinction between access and phenomenal consciousness, and upholding the necessity of phenomenal unity at least.

It’s a good, helpful book; what the content lacks in novelty it makes up in clarity. Chalmers has a persuasive style, and his expositions come across as moderate and sensible (perhaps the reduced epiphenomenalism helps a bit). It’s surprising that the denial of materialism (which is surely the dominant view of our time) can seem so much like common sense.