Archive for February, 2008

Picture: crying baby.

Picture: Blandula. If ever you suspected that the ‘hard problem’ of consciousness was a recondite philosophical matter, of no significance in the real world, a recent piece in the NYT (discussed briefly by our old friend Steve Esser) should convince you otherwise.

It explains how Kanwaljeet Anand discovered that new-born babies were receiving surgery without anaesthetics, and goes on to discuss the evidence that fetal surgery, increasingly common, also causes pain responses in the unborn child.

Why would doctors be so callous? They don’t believe new-born children, let alone fetuses, feel any pain. When they operate on a new-born baby, there may be some reflexes, but there are no pain qualia – no pain is really being felt. So all they need to give in the way of drugs is a paralytic – something that makes the subject keep still during the operation, but has no effect on how it feels.

It seems to me that, although they may not be clearly aware of it, the doctors here are making important real world decisions on the basis of their philosophical beliefs about qualia. Quite bold beliefs too: they don’t believe the fetuses can be feeling pain because the brain, and in particular the cortex, is not yet wired up sufficiently for consciousness, and if you’re not conscious, you can’t have qualia.

It’s quite possible they are right, I suppose, but I think most people who have been through some of the philosophical arguments would doubt whether we can speak of these matters with such certainty, and would be inclined to err on the side of safety. To be honest, it’s a bit hard to shake the suspicion that the real reason fetuses don’t get anaesthetics is because fetuses don’t complain…

Picture: Bitbucket. I think the real issue here is slightly different. Why was Anand concerned about the babies who were coming back to him after surgery, before he even knew they weren’t being anaesthetised? Because they were traumatised. Their systems had been flooded with hormones usually associated with pain, their breathing was poor, their blood sugar was out of order. When they were given anaesthetics, instead of just paralytics, they survived the operations in better health. There were medical reasons for avoiding the pain, not just subjective ones. The problem arose out of the fact that the surgeons and anaesthetists had got used to the idea that relief of subjective pain was all that mattered. So when they were dealing with patients who they judged incapable of subjective pain, they saw no reason to anaesthetise. They were, in fact, paying too much attention to the idea of qualia, and not enough to the physical, medical events which actually constitute pain even if your cortex isn’t working. (The passage in the NYT piece about people with hydranencephaly is a dreadful red herring, by the way: it just isn’t true that they have no cortex.)


Picture: Blandula. True, there were other reasons for using anaesthetics in that particular case. But where do the doctors get their certainty that no pain is being felt? You can’t talk about your qualia if your cortex isn’t working, but since qualia are an utter mystery and separable in principle from the physical operation of the brain altogether, there’s no strong reason to think you’re not feeling them. We take winces and grimaces as good indicators of pain in normal people: how can we be sure it’s any different in fetuses? Is it just the talking that matters? Is an unreportable pain no pain at all? Surely not.

Picture: Bitbucket. You say qualia are separable from the physical operation of the brain, but you don’t really believe that in any important way. You know quite well that brain events like the firing of certain neurons are what cause these sensations you misdescribe as ineffable qualia. If I hit you with a physical spade, you’ll feel pain qualia alright, however ‘separable’ you think they are.

Let’s get a bit more philosophical here. Why is pain bad? Why is it to be avoided and minimised? There are several traditional answers — I’d say the following just about cover it.

  • It just is. Pain is just bad in its essential nature.
  • The moral rules agreed by our society enjoin us not to cause pain.
  • Ethics requires us to seek the greatest possible balance of pleasure over pain.
  • You wouldn’t want pain inflicted on you, so don’t inflict it on other people.
  • Witnessing or hearing about pain makes me feel bad.
  • Carelessness about pain is the beginning of a general callousness which might undermine our concern for others, without which we’re doomed.

The first doesn’t mean anything, if you ask me. The second might be right, but the established rules, judging by medical practice, seem to say fetal pain is something we don’t have to worry about. Social rules are negotiable, anyway, so there’s no final answer there. The third one, like the first one, just assumes pain is bad. The fourth is OK, but begs the question so far as fetuses are concerned, because we don’t know what I’d want done to me if I were a fetus. I might just want them to get on with the operation, unbothered by the apparent ‘pain’ I wasn’t actually feeling. Numbers five and six, if you ask me, get close to the real motivation here.

Got that? OK, now let’s revert to your question — why do the doctors behave like this, why do they torture babies? It’s not just, I suggest, that they don’t believe in fetal pain. The more crucial point is that they know fetuses don’t remember pain. There was a time, in the not-too-distant past, when some children, not just babies, were merely given curare before being operated on. Just like the new-born babies here, it paralysed them so the surgeon could get on with the job, but did nothing whatever to relieve the pain. Did the doctors think the children didn’t feel the pain? Well, there’s some talk about them thinking curare was an anaesthetic, but that’s balderdash — they didn’t use it for adults. No, I believe they knew quite well the children were in pain, they just thought it didn’t matter because children don’t remember these things. Give ‘em an ice cream, they thought, and we’ll hear no more about it. (I’m not saying that’s necessarily correct, by the way).

In essence, the doctors implicitly agreed with me. There were no deep metaphysical or ethical reasons to avoid pain, and reciprocity never really worked, because there were always relevant differences between you and the person suffering the pain (unless you were fighting your twin brother, perhaps). You could always say: yeah, do as you would be done by, but if I were in the state that person’s in, I would want it done to me. So, the doctors rightly concluded, there were only two fundamental reasons to avoid causing pain: first, it upset people. As a result, our social rules were generally set to minimise it, and so second, the unnecessary infliction of pain would tend to have bad social effects, undermining the rules and generally risking a withdrawal of social consent. Pain which will never be remembered cannot upset us or have bad social consequences — so it doesn’t matter.

There’s more evidence that this is the established medical view. Besides paralysing and anaesthetising patients, doctors use drugs which specifically remove the memory. In some cases, drugs which remove the memory have been used instead of anaesthetics, just because in medical eyes, pain you don’t remember is the same as pain that never happened. It’s perfectly normal contemporary practice to use a mixture of true anaesthetics and amnestics.

So, to sum up, pain has three aspects: medical (the hormonal reaction, the changes in the body), psychological (we don’t like thinking about it), and social (we’ve agreed to outlaw pain, and erosion of that rule undermines society generally). And that’s all there is. The medical, psychological, and social aspects add up to what pain is: there’s no mystical component, no qualia.

Picture: Blandula. That just seems mad to me – bonkers and almost diabolical. Surely you can see that the reason pain is bad is because it hurts? That’s all pain is – hurting.

You and your supposed doctor friends are a bit over-confident about your amnesia anyway, aren’t you? The NYT article reports evidence that pain experienced early on affects a child’s responses later. Do you really feel confident that agonising episodes early on — even in the womb — are not lurking in some damaging form in the subconscious of the child, or even the adult?

Picture: Fat Controller.

Asim Roy insists (Connectionism, controllers and a brain theory) that there are controllers in the brain. This is not as sinister as it might sound.

Roy presents his views as an attack on connectionist orthodoxy. Connectionists, he says, believe that the brain does not have in it groups of cells that control other parts of the brain. He cites many sources, and quotes explicitly from Rumelhart, Hinton, and McClelland, who say

“There is one final aspect of our models which is vaguely derived from our understanding of brain functioning. This is the notion that there is no central executive overseeing the general flow of processing…”

Roy begins by addressing a startlingly radical argument against the notion of controllers in any system. He takes up the analogy of a human being driving a car. The steering and other systems, according to a normal understanding, are controlled by the driver; but, he says, there is an argument that this is the wrong way of looking at it. It’s true that the steering is guided by input from the driver, but there is also feedback to the driver from the various systems of the car, which also determine what the driver does. The current position of the wheel and the response of the car determine how much force the driver applies to the wheel. So really the car is controlling the driver as well as vice versa, and the very idea of a controller within a system is a misapprehension.

This is obviously a slightly silly argument, but it serves to show that we need to define the notion of a controller properly. Roy suggests that the essence of a controller is that it is not dependent on inputs. The steering moves only in response to my inputs, whereas if I choose (and am tired of life, presumably) I can ignore any feedback from the car and turn the wheel any arbitrary number of degrees I settle on. Similarly, although my channel-hopping is normally guided by what appears on the TV screen, if I wish I can choose to change channels arbitrarily, or tap out a rhythm on the remote control which has nothing to do with the TV. A controller, in short, can operate in different modes while a subservient system cannot.

Armed with this definition, Roy argues that a connectionist network which learns by back-propagation requires an external agent to set the parameters, whether it be a human operator or another module within the overall system. I suppose it could be retorted that this controller is indeed external, and exercises its influence only during learning, but Roy would probably say that if we’re modelling the human brain, these controllers would have to be taken to be part of the neural set-up. In any case, he goes on to give examples from neuroscience to show that some parts of the brain do indeed seem to operate as controllers of some other parts. I must say that this claim seems so evidently true to me that argument in its favour seems almost redundant.
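The point about learning needing a parameter-setter can be made concrete with a toy sketch. This is my own illustration, not code from Roy’s paper, and all the names in it are invented: a bare-bones gradient-descent learner whose learning rate and stopping rule are chosen not by the learning rule itself but by a separate controller object.

```python
class Controller:
    """Sets the learner's parameters; the learning rule itself never does."""
    def __init__(self, learning_rate=0.1, max_steps=200, tolerance=1e-4):
        self.learning_rate = learning_rate
        self.max_steps = max_steps
        self.tolerance = tolerance


class Learner:
    """A one-weight linear model, trained by gradient descent on squared error."""
    def __init__(self):
        self.w = 0.0

    def step(self, data, lr):
        # One gradient-descent step; note the step size comes from outside.
        grad = sum(2 * (self.w * x - y) * x for x, y in data) / len(data)
        self.w -= lr * grad

    def loss(self, data):
        return sum((self.w * x - y) ** 2 for x, y in data) / len(data)


def train(learner, controller, data):
    # The controller, not the learner, decides how long and how fast to learn.
    for _ in range(controller.max_steps):
        learner.step(data, controller.learning_rate)
        if learner.loss(data) < controller.tolerance:
            break
    return learner.w


# Fit y = 2x; the weight should converge toward 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(Learner(), Controller(), data)
```

The division of labour is the whole point: `Learner.step` is the connectionist part, but it cannot run at all until something else has fixed the learning rate and the stopping criterion — which is roughly Roy’s claim about back-propagation writ small.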

Roy’s conclusion is that his reasoning opens the way to a better approach, where instead of being left to local learning, the methods and weightings to be used can be dictated by a central system, at least on some occasions.

I think Roy’s conclusion that there must be controllers within the brain is hard to disagree with: the question is more whether he’s demolishing a straw man. What did Rumelhart, Hinton, and McClelland actually mean? I suspect they meant to deny the existence of a central ‘homunculus’, a little man in the brain who does all the real work; and also to deny that the brain has a CPU, a place where all the data and instructions get matched together and processed. I don’t really think they meant to deny that any part of the brain ever controls any other part. I’m not sure that connectionists have ever reached the point of proposing an overall architecture for the brain, or even that that would be within the scope of the theory; rather, they just want to investigate a way of working which may be characteristic of parts of the brain.

I can imagine two ways connectionists might respond to Roy’s claims without directly contesting them. One would be to accept that the control function he describes exists, but claim that it doesn’t reside in one fixed place. Different parts of the overall network might be in control at different times; it’s not that bits of the brain don’t control other parts, merely that control is sort of smeared around. But equally I can imagine a connectionist simply saying: of course we never meant to deny that that sort of control relation exists, so thanks for the clarification…

I think Roy’s attempt to define what a controller does is interesting, however. A controller, on his view, can follow the inputs, or operate without them. But surely other systems can operate without inputs, too? If I black out and cease to provide the car with control inputs, it doesn’t cease to function. It may function disastrously, but that’s also true if I exercise my controller’s right to ignore inputs and take a kind of existentialist approach to steering. You could say that in cases like the black-out one the controlled system is continuing to receive inputs – they’re just consistently zero. The point really is not operating without inputs, but being able to ignore the ones you’re getting. I think what we’re grasping for here is that the inputs to a controller don’t determine the outputs. That’s interesting because it sounds like one version of free will. When I act freely, my actions were not determined by the environmental inputs. But it’s notoriously hard to explain how that could be so in a deterministic world.

If you’re a softy compatibilist about free will, you may be inclined to argue that it is to some extent a matter of degree. Where input A always gets output B, no question of freedom or controllerhood arises. But if there is a complex internal algorithm working away, such that input A may get any letter of the alphabet on different occasions, things start to look different. And if the outputs begin to show a certain kind of coherence or meaningful salience – if they begin to spell intelligible words – we might be inclined to say that the system is in some sense in control. If when we turn the wheel, the car does not respond, we might gasp metaphorically “The damn thing’s got a will of its own!”; further, if it actually directs itself down a side road and into the filling station we might seriously and literally begin to credit the car with intelligence. So far so Dennettian.

If that line of reasoning is right, controllerhood is indeed a tricky business: it might be that the only way to know whether some group of cells is a controller would be to watch it and see whether it did controllery things. And if that’s the case, maybe smeary connectionism has something in it after all…