Picture: Octopus. Peter Godfrey-Smith is a philosophy professor who has spent some time observing octopus behaviour, so it’s only natural that he should start to wonder about octopus minds. The Harvard Gazette reports some of his speculations: perhaps animal minds lack the cohesiveness of their human equivalents. Perhaps in an octopus, going a little further, we see intelligence without a unified self.

Why would we think that?  The octopus brain is relatively large, but it is organised in a way utterly unlike ours: in particular it has large ganglia in its arms which together contain more neurons than the central group which we naturally speak of as the brain.  There’s some evidence that an octopus can be in two (or nine) minds about what to do, with some arms ‘wanting’ to hide while others ‘want’ to venture out after food. Perhaps its inner experience is akin to the inner experience of being a committee – whatever that’s like?

It would of course be rash to draw any conclusions on the basis of physiology alone. Does the location of neurons matter that much? If we surgically altered an octopus so that the outlying ganglia were adjacent to the central brain, without changing the layout of the neural connections, would that make any difference to the way it thought? If we did some similar surgery on a human being and split off some bits of the cortex while stretching the neurons and keeping the connections intact, would that suddenly give them committee consciousness? It seems unlikely.

In one obvious way the human brain is actually more divided than that of the octopus; ours is split in two down the middle, while theirs is centrally united (a glance at the layout of an octopus brain shows what a radically different design it follows). Does this give us dual consciousness? Some might say it did up to a point; talk of left- and right-brain thinking has become quite popular, if not always well grounded in science. Famously, moreover, when the link between the hemispheres of the human brain is cut, some divided behaviour can be evoked: a left hand able to indicate what ‘its’ eye can see while the right hand has no idea. But even when the connection has been cut, patients behave quite normally in ordinary life and do not report any sense of division: it takes very specific experimental circumstances to bring out specific peculiarities. This is obviously a good thing for the patients, and it’s also good for humans generally that the divided shape of the brain doesn’t lead to any equivocation in our normal responses.

That surely is something that evolution would guarantee: we think of the self as something abstract or even spiritual, but it rests on the solid fact of a single united organism. Any animal which was really ‘in two minds’ for very long over serious decisions would suffer the kind of disadvantage which would surely get it weeded out. That seems another reason to doubt whether the octopus sense of self can really be all that divided: it just wouldn’t be practical.

While we’re on practical, evolutionary considerations, we might ask ourselves if there’s some simpler reason for the octopus having ‘brains in its arms’. What a nervous system does is control and co-ordinate things, and for speed and economy it seems likely that the best design is always going to involve centralisation of the more complex neural operations. Nevertheless, even in humans not all operations are conducted centrally. Although in general it pays to route action decisions through the centre, there is a small price to be paid in terms of speed, and where short response times are crucial and the operation is relatively simple, it’s better to do it locally. This is why reflexes don’t trouble your brain: they’re wired up to happen automatically and instantly on the basis of local neurons. Could it be that octopus legs feature large ganglia for similar reasons of speed, and need larger collections of neurons simply because the task of orchestrating the movements of their tentacles is inherently more complex than the job of twitching a mechanically simple human arm? (It’s not just that controlling a tentacle is more complex than controlling a jointed arm:  octopuses also have the ability to change the pattern of their skin rapidly, another task which surely uses up a significant amount of processing power.)

It has been shown that the ‘tentacle brains’ are indeed capable of operating basic tentacle behaviour without input from the central brain, and it looks as though the octopus design delegates these control functions.  In humans this kind of operation – controlling the sequence of muscle contractions required for you to walk along, for example – is just the sort of thing that drops out of consciousness; so it seems probable that the octopus’s leg brains, however large, have no role in any higher mental functions it may have.  I’m afraid it isn’t all that likely that these higher functions are actually very advanced: though the octopus brain is large for an invertebrate, in proportion to its body it’s smaller than those of mammals or birds.  At the risk of being rude, the remarkable proportions could be as much a matter of a small central brain as of large leg brains. Perhaps, we can speculate, if some future cephalopod did indeed attain human-level consciousness, it would turn out to have so large a central brain that the ganglia in its tentacles no longer seemed quite so remarkable.

Picture: Peter Hacker. Peter Hacker made a surprising impact with his recent interview in the TPM, which was reported and discussed in a number of other places.  Not that his views aren’t of interest, and the trenchant terms in which he expressed them probably did no harm: but he seemed mainly to be recapitulating the views he and Max Bennett set out in 2003, notably the accusation that the study of consciousness is plagued by the ‘mereological fallacy’ of taking a part for the whole and ascribing to the brain alone the powers of thought, belief, etc., which are properly ascribed only to whole people.

There’s certainly something in Hacker’s criticism, at least so far as popular science reporting goes. I’ve lost count of the number of times I’ve read newspaper articles that explain in breathless tones the latest discovery: that learning, or perception, or thought are really changes in the brain!  Let’s be fair: the relationship between physical brain and abstract mind has not exactly been free of deep philosophical problems over the centuries. But the point that the mind is what the brain does, that the relationship is roughly akin to the relationship between digestion and gut, or between website and screen, surely ought not to trouble anyone too much?

You could say that in a way Bennett and Hacker have been vindicated since 2003 by the growth of the ‘extended mind’ school of thought. Although it isn’t exactly what they were talking about, it does suggest a growing acknowledgement that too narrow a focus on the brain is unhelpful. I think some of the same counter-arguments also apply. If we have a brain in a vat, functioning as normally as possible in such strange circumstances, are we going to say it isn’t thinking?  If we take the case of Jean-Dominique Bauby, trapped in a non-functioning body but still able to painstakingly dictate a book about his experience, can’t we properly claim that his brain was doing the writing? No doubt Hacker would respond by asking whether we are saying that Bauby had become a mere brain, that he wasn’t a person any more. Although his body might have ceased to function fully, he still surely had the history and capacities of a person rather than simply those of a brain.

The other leading point which emerges in the interview is a robust scepticism about qualia.  Nagel in particular comes in for some stick, and the phrase ‘there is something it is like’, often invoked in support of qualia, is given a bit of a drubbing. If you interpret the phrase as literally invoking a comparison, it is indeed profoundly obscure; on the other hand we are dealing with the ineffable here, so some inscrutability is to be expected. Perhaps we ought to concede that most people readily understand what it is that Nagel and others are getting at.  I quite enjoyed the drubbing, but the issue can’t be dismissed quite as easily as that.

From the account given in the interview (and I have the impression that this is typical of the way he portrays it) you would think that Hacker was alone in his views, but of course he isn’t. On the substance of his views, you might expect him to weigh in with some strong support for Dennett; but this is far from the case.  Dennett, in Hacker’s view, is too ready to attribute mental states to the brain for his ideas to be anything other than incoherent.  It’s well worth reading Dennett’s own exasperated response (pdf), where he sets out the areas of agreement before wearily explaining that he knows, and has always said, that care needs to be taken in attributing mental states to the brain; but that given due care it’s a useful and harmless way of speaking.

John Searle also responded to Bennett and Hacker’s book and, restrained by no ties of underlying sympathy, dismissed their claims completely. Conscious states exist in the brain, he asserted: Hacker got this stuff from misunderstanding Wittgenstein, who says that observable behaviour (which only a whole person can provide) is a criterion for playing the language game, but never said that observable behaviour was a criterion for conscious experience.  Bennett and Hacker confuse the criterial basis for the application of mental concepts with the mental states themselves. Not only that, they haven’t even got their mereology right: they’re talking about mistaking the part for the whole, but the brain isn’t a part of a person, it’s a part of a body.

Hacker clearly hasn’t given up, and it will be interesting to see the results of his current ‘huge project, this time on human nature’.