iCub iTalk?

Picture: iCub. More about babies – of a sort. You may have seen reports that Plymouth University, with support from many other institutions, has won the opportunity to teach a baby robot to speak. The robot in question is the iCub, and the project is part of Italk, funded by the EU under its Seventh Framework Programme, to the tune of £4.7 million (you can’t help wondering whether it wouldn’t have been value for money to have slipped Steve Grand half a million while they were at it…).

The gist of it seems to be that next year the people at Plymouth will get the iCub to engage in various dull activities like block-stacking (a perennial with both babies and AI) and try to introduce speech communication about the tasks. It is meant to be learning in a way far closer to the typical human experience than anything attempted before. Unfortunately, I haven’t been able to find any clear statement of how they expect the language skills to work, though there is quite a lot of technical detail available about the iCub. This is evidently a pretty splendid piece of kit, although the current model has a mask for a face, which means none of the potent interactive procedures which depend on facial expression, as explored by Cynthia Breazeal and others, will be available. This is a shame, since real babies do use face recognition and smiles to get more feedback out of adults.

In one respect the project has an impeccable background, since Alan Turing, in the famous paper which arguably gave rise to modern Artificial Intelligence, speculated that a thinking robot might have to begin as a baby and be educated. But it seems a tremendously ambitious undertaking. If we are to believe Chomsky, the human language acquisition device is built in – it has to be, since human babies get such poor input, with few corrections and limited samples, yet learn at a fantastic speed. They just don’t get enough information about the language around them to be able to reverse-engineer its rules; so they must in fact simply be customising the setting of their language machine and filling up its vocabulary stores. The structures of real-world languages arguably support this point of view, since the variations in grammar seem to fall within certain limited options, rather than exploiting the full range of logical possibilities. If this is all true, then a robot which doesn’t have a built-in language facility is not going to get very far with talking just by playing with some toys.

Of course Chomsky is not, as it were, the only game in town: a more recent school of thought says that by treating language as a formal code, and assuming that babies have to learn the rules before they can work out what people mean, Chomsky puts the cart before the horse; actually, it’s because babies can see what people mean that they can crack the code of grammar so efficiently.

That’s a more congenial point of view for the current project, I imagine, but it raises another question. On this view, babies are not using an innate language module, but they are using an innate ability to understand, to get people’s drift. I don’t think the Plymouth team have worked out a way of building understanding in beforehand (that would be a feat well worth trumpeting in its own right), so is it their expectation that the iCub will acquire understanding through training? Or are their aims somewhat lower?

It seems a key question to me: if they want the robot to understand what it’s saying, they’re aiming high and it would be good to know what the basis for their optimism is (and how they’re going to demonstrate the achievement). If not, if they’re merely aiming for a basic level of performance without worrying about understanding (and the selection of experiments does rather point in that direction), the project seems a bit underwhelming. Would this be any different, fundamentally, from what Terry Winograd was doing, getting on for forty years ago (albeit with SHRDLU, a less charismatic robot)?

Baby pains

Picture: crying baby.

Picture: Blandula. If ever you suspected that the ‘hard problem’ of consciousness was a recondite philosophical matter, of no significance in the real world, a recent piece in the NYT (discussed briefly by our old friend Steve Esser) should convince you otherwise.

It explains how Kanwaljeet Anand discovered that new-born babies were receiving surgery without anaesthetics, and goes on to discuss the evidence that fetal surgery, increasingly common, also causes pain responses in the unborn child.

Why would doctors be so callous? They don’t believe new-born children, let alone fetuses, feel any pain. When they operate on a new-born baby, there may be some reflexes, but there are no pain qualia – no pain is really being felt. So all they need to give in the way of drugs is a paralytic – something that makes the subject keep still during the operation, but has no effect on how it feels.

It seems to me that, although they may not be clearly aware of it, the doctors here are making important real world decisions on the basis of their philosophical beliefs about qualia. Quite bold beliefs too: they don’t believe the fetuses can be feeling pain because the brain, and in particular the cortex, is not yet wired up sufficiently for consciousness, and if you’re not conscious, you can’t have qualia.

It’s quite possible they are right, I suppose, but I think most people who have been through some of the philosophical arguments would doubt whether we can speak of these matters with such certainty, and would be inclined to err on the side of safety. To be honest, it’s a bit hard to shake the suspicion that the real reason fetuses don’t get anaesthetics is because fetuses don’t complain…

Picture: Bitbucket. I think the real issue here is slightly different. Why was Anand concerned about the babies who were coming back to him after surgery, before he even knew they weren’t being anaesthetised? Because they were traumatised. Their systems had been flooded with hormones usually associated with pain, their breathing was poor, their blood sugar was out of order. When they were given anaesthetics, instead of just paralytics, they survived the operations in better health. There were medical reasons for avoiding the pain, not just subjective ones. The problem arose out of the fact that the surgeons and anaesthetists had got used to the idea that relief of subjective pain was all that mattered. So when they were dealing with patients who they judged incapable of subjective pain, they saw no reason to anaesthetise. They were, in fact, paying too much attention to the idea of qualia, and not enough to the physical, medical events which actually constitute pain even if your cortex isn’t working. (The passage in the NYT piece about people with hydranencephaly is a dreadful red herring, by the way: it just isn’t true that they have no cortex.)


Picture: Blandula. True, there were other reasons for using anaesthetics in that particular case. But where do the doctors get their certainty that no pain is being felt? You can’t talk about your qualia if your cortex isn’t working, but since qualia are an utter mystery and separable in principle from the physical operation of the brain altogether, there’s no strong reason to think you’re not feeling them. We take winces and grimaces as good indicators of pain in normal people: how can we be sure it’s any different in fetuses? Is it just the talking that matters? Is an unreportable pain no pain at all? Surely not.

Picture: Bitbucket. You say qualia are separable from the physical operation of the brain, but you don’t really believe that in any important way. You know quite well that brain events like the firing of certain neurons are what cause these sensations you misdescribe as ineffable qualia. If I hit you with a physical spade, you’ll feel pain qualia alright, however ‘separable’ you think they are.

Let’s get a bit more philosophical here. Why is pain bad? Why is it to be avoided and minimised? There are several traditional answers — I’d say the following just about cover it.

  • It just is. Pain is just bad in its essential nature.
  • The moral rules agreed by our society enjoin us not to cause pain.
  • Ethics requires us to seek the greatest possible balance of pleasure over pain.
  • You wouldn’t want pain inflicted on you, so don’t inflict it on other people.
  • Witnessing or hearing about pain makes me feel bad.
  • Carelessness about pain is the beginning of a general callousness which might undermine our concern for others, without which we’re doomed.

The first doesn’t mean anything, if you ask me. The second might be right, but the established rules, judging by medical practice, seem to say fetal pain is something we don’t have to worry about. Social rules are negotiable, anyway, so there’s no final answer there. The third one, like the first one, just assumes pain is bad. The fourth is OK, but begs the question so far as fetuses are concerned, because we don’t know what I’d want done to me if I were a fetus. I might just want them to get on with the operation, unbothered by the apparent ‘pain’ I wasn’t actually feeling. Numbers five and six, if you ask me, get close to the real motivation here.

Got that? OK, now let’s revert to your question — why do the doctors behave like this, why do they torture babies? It’s not just, I suggest, that they don’t believe in fetal pain. The more crucial point is that they know fetuses don’t remember pain. There was a time, in the not-too-distant past, when some children, not just babies, were merely given curare before being operated on. Just like the new-born babies here, it paralysed them so the surgeon could get on with the job, but did nothing whatever to relieve the pain. Did the doctors think the children didn’t feel the pain? Well, there’s some talk about them thinking curare was an anaesthetic, but that’s balderdash — they didn’t use it for adults. No, I believe they knew quite well the children were in pain, they just thought it didn’t matter because children don’t remember these things. Give ’em an ice cream, they thought, and we’ll hear no more about it. (I’m not saying that’s necessarily correct, by the way).

In essence, the doctors implicitly agreed with me. There were no deep metaphysical or ethical reasons to avoid pain, and reciprocity never really worked, because there were always relevant differences between you and the person suffering the pain (unless you were fighting your twin brother, perhaps). You could always say: yeah, do as you would be done by, but if I were in the state that person’s in, I would want it done to me. So, the doctors rightly concluded, there were only two fundamental reasons to avoid causing pain: first, it upset people. As a result, our social rules were generally set to minimise it, and so second, the unnecessary infliction of pain would tend to have bad social effects, undermining the rules and generally risking a withdrawal of social consent. Pain which will never be remembered cannot upset us or have bad social consequences — so it doesn’t matter.

There’s more evidence that this is the established medical view. Besides paralysing and anaesthetising patients, doctors use drugs which specifically remove the memory. In some cases, drugs which remove the memory have been used instead of anaesthetics, just because in medical eyes, pain you don’t remember is the same as pain that never happened. It’s perfectly normal contemporary practice to use a mixture of true anaesthetics and amnestics.

So, to sum up, pain has three aspects: medical (the hormonal reaction, the changes in the body), psychological (we don’t like thinking about it), and social (we’ve agreed to outlaw pain, and erosion of that rule undermines society generally). And that’s all there is. The medical, psychological, and social aspects add up to what pain is: there’s no mystical component, no qualia.

Picture: Blandula. That just seems mad to me – bonkers and almost diabolical. Surely you can see that the reason pain is bad is because it hurts? That’s all pain is – hurting.

You and your supposed doctor friends are a bit over-confident about your amnesia anyway, aren’t you? The NYT article reports evidence that pain experienced early on affects a child’s responses later. Do you really feel confident that agonising episodes early on — even in the womb — are not lurking in some damaging form in the subconscious of the child, or even the adult?

Controllers in the brain

Picture: Fat Controller.

Asim Roy insists (Connectionism, controllers and a brain theory) that there are controllers in the brain. This is not as sinister as it might sound.

Roy presents his views as an attack on connectionist orthodoxy. Connectionists, he says, believe that the brain does not have in it groups of cells that control other parts of the brain. He cites many sources, and quotes explicitly from Rumelhart, Hinton, and McClelland, who say

“There is one final aspect of our models which is vaguely derived from our understanding of brain functioning. This is the notion that there is no central executive overseeing the general flow of processing…”

Roy begins by addressing a startlingly radical argument against the notion of controllers in any system. He takes up the analogy of a human being driving a car. The steering and other systems, according to a normal understanding, are controlled by the driver; but, he says, there is an argument that this is the wrong way of looking at it. It’s true that the steering is guided by input from the driver, but there is also feedback to the driver from the various systems of the car, which also determine what the driver does. The current position of the wheel and the response of the car determine how much force the driver applies to the wheel. So really the car is controlling the driver as well as vice versa, and the very idea of a controller within a system is a misapprehension.

This is obviously a slightly silly argument, but it serves to show that we need to define the notion of a controller properly. Roy suggests that the essence of a controller is that it is not dependent on inputs. The steering moves only in response to my inputs, whereas if I choose (and am tired of life, presumably) I can ignore any feedback from the car and turn the wheel any arbitrary number of degrees I settle on. Similarly, although my channel-hopping is normally guided by what appears on the TV screen, if I wish I can choose to change channels arbitrarily, or tap out a rhythm on the remote control which has nothing to do with the TV. A controller, in short, can operate in different modes while a subservient system cannot.

Armed with this definition, Roy argues that a connectionist network which learns by back-propagation requires an external agent to set the parameters, whether it be a human operator or another module within the overall system. I suppose it could be retorted that this controller is indeed external, and exercises its influence only during learning, but Roy would probably say that if we’re modelling the human brain, these controllers would have to be taken to be part of the neural set-up. In any case, he goes on to give examples from neuroscience to show that some parts of the brain do indeed seem to operate as controllers of some other parts. I must say that this claim seems so evidently true to me that argument in its favour seems almost redundant.
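To make the distinction a bit more tangible, here is a minimal Python sketch of Roy’s controller/subservient contrast (my own toy illustration, not anything from Roy’s paper, with invented class names and numbers): the subservient component’s output is fully determined by its input, while the controller can either track its feedback or ignore it, depending on an internal mode.

```python
# A minimal sketch of the controller/subservient distinction.
# Illustrative only: the class names and numbers are invented here.

class Steering:
    """Subservient system: output is a fixed function of its input."""
    def respond(self, driver_torque: float) -> float:
        return 0.5 * driver_torque          # wheel angle strictly follows the input


class Driver:
    """Controller: can follow feedback, or ignore it and act on its own terms."""
    def __init__(self):
        self.mode = "follow"                # or "ignore"

    def act(self, feedback: float) -> float:
        if self.mode == "follow":
            return -0.8 * feedback          # normal closed-loop correction
        return 30.0                         # arbitrary output, independent of feedback


steering, driver = Steering(), Driver()
print(steering.respond(driver.act(2.0)))    # closed loop: input determines output
driver.mode = "ignore"
print(steering.respond(driver.act(2.0)))    # the controller ignores feedback; the steering cannot
```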

Roy’s conclusion is that his reasoning opens the way to a better approach, where instead of being left to local learning, the methods and weightings to be used can be dictated by a central system, at least on some occasions.

I think Roy’s conclusion that there must be controllers within the brain is hard to disagree with: the question is more whether he’s demolishing a straw man. What did Rumelhart, Hinton, and McClelland actually mean? I suspect they meant to deny the existence of a central ‘homunculus’, a little man in the brain who does all the real work; and also to deny that the brain has a CPU, a place where all the data and instructions get matched together and processed. I don’t really think they meant to deny that any part of the brain ever controls any other part. I’m not sure that connectionists have ever reached the point of proposing an overall architecture for the brain, or even that that would be within the scope of the theory; rather, they just want to investigate a way of working which may be characteristic of parts of the brain.

I can imagine two ways connectionists might respond to Roy’s claims without directly contesting them. One would be to accept that the control function he describes exists, but claim that it doesn’t reside in one fixed place. Different parts of the overall network might be in control at different times; it’s not that bits of the brain don’t control other parts, merely that control is sort of smeared around. But equally I can imagine a connectionist simply saying: of course we never meant to deny that that sort of control relation exists, so thanks for the clarification…

I think Roy’s attempt to define what a controller does is interesting, however. A controller, on his view, can follow the inputs, or operate without them. But surely other systems can operate without inputs, too? If I black out and cease to provide the car with control inputs, it doesn’t cease to function. It may function disastrously, but that’s also true if I exercise my controller’s right to ignore inputs and take a kind of existentialist approach to steering. You could say that in cases like the black-out one the controlled system is continuing to receive inputs – they’re just consistently zero. The point really is not operating without inputs, but being able to ignore the ones you’re getting. I think what we’re grasping for here is that the inputs to a controller don’t determine the outputs. That’s interesting because it sounds like one version of free will. When I act freely, my actions were not determined by the environmental inputs. But it’s notoriously hard to explain how that could be so in a deterministic world.

If you’re a softy compatibilist about free will, you may be inclined to argue that it is to some extent a matter of degree. Where input A always gets output B, no question of freedom or controllerhood arises. But if there is a complex internal algorithm working away, such that input A may get any letter of the alphabet on different occasions, things start to look different. And if the outputs begin to show a certain kind of coherence or meaningful salience – if they begin to spell intelligible words – we might be inclined to say that the system is in some sense in control. If when we turn the wheel, the car does not respond, we might gasp metaphorically “The damn thing’s got a will of its own!”; further, if it actually directs itself down a side road and into the filling station we might seriously and literally begin to credit the car with intelligence. So far so Dennettian.

If that line of reasoning is right, controllerhood is indeed a tricky business: it might be that the only way to know whether some group of cells is a controller would be to watch it and see whether it did controllery things. And if that’s the case, maybe smeary connectionism has something in it after all…

Phantom Penises

Picture: David. It often happens that when someone has had a limb amputated, they experience feelings in the limb they haven’t got any more – the ‘phantom limb’ phenomenon. The phantoms may be just temporary, a curious by-product of the operation. Sometimes the feelings are partial – where an arm has been removed, for example, patients may feel as though they still had their hand, but attached to the shoulder without any intervening arm. Sometimes the experience is more of a problem, with feelings of intense pain in the amputated part which won’t go away and can’t be treated by normal means.

I therefore winced slightly on learning that as well as phantom limbs, there are phantom penises, experienced by those who have undergone a penectomy (a word which is well worthy of a wince in itself). V.S. Ramachandran, who devised an ingenious way of using mirrors to help people with phantom limb pain, by fooling the brain into briefly believing that the missing limb was back, has now turned his attention to penises, together with P.D. McGeoch. This time the research is not about pain relief, however, but gender identity, where the possession or lack of a penis is clearly highly relevant.

Penectomies, it seems, are performed for two main reasons: to eliminate a malignant cancer, or as part of gender reassignment treatment. Since male-to-female transsexuals typically feel themselves to be ‘a woman in a man’s body’, Ramachandran and McGeoch reasoned that their response to penectomy might well be different from that of other patients. And so it proved: while 58% of men who have undergone penectomy for other reasons reported sensation in a phantom penis afterwards, only 30% of those who had done so as part of gender reassignment had a similar experience. So people who felt that a penis was not part of their true body image were much less likely to experience a phantom penis after removal.

Stranger still, perhaps, 62% of a group of female-to-male transsexuals reported having had phantom penis sensations before any surgery. In many cases the sensations dated back for years: in others, they did not occur until hormone treatment had begun. No non-transsexual women, unsurprisingly, reported the sensation of having a phantom penis (‘even when prompted’ as the researchers say).

Ramachandran and McGeoch conclude that the study backs the view that gender identity feelings are hard-wired into the brain, and in transsexuals may be at odds with actual physical shape. They recognise, however, that there are some potential criticisms of the way the research was done.

One weak point is the risk of confabulation. By asking female-to-male transsexuals whether they had ever had phantom penis sensations, were the researchers discovering a phenomenon, or creating one? Transsexuals often have to struggle for the acceptance of their view of themselves; they have a natural reason to want to assert anything that might strengthen their case. The experience of a phantom penis would clearly be a useful piece of evidence in this context. Since female-to-male transsexuals by definition feel that they ought to have a penis, it may not be much of a leap to say they feel as though they have one, once the possibility is suggested.

To some extent, moreover, male-to-female transsexuals might have been inclined to feel that any report of phantom sensations was letting the side down in some subtle way; suggesting that they or their bodies somehow couldn’t give up the idea of having a penis very readily. Indeed, an old Freudian theory, which the researchers pour scorn on, had it that the symptoms of phantom limbs expressed an unconscious desire that the limb was still there, so reasoning along those lines is by no means impossible.

However, the researchers have a number of counterarguments. Perhaps the most striking is that female-to-male transsexuals were often able to report details of the phantom penis and its behaviour, saying that it fell short of their ideal penis, for example. Surely an imagined penis, a wish-fulfilment penis, would be fully satisfactory? Less convincingly, I think, the researchers quote cases where the subjects reported the phantom penis behaving in ways – morning or unprompted erections, for example – which a female subject allegedly would have been unlikely to add to a confabulated account. I suspect the female subjects are likely to have been more aware of this kind of detail than the researchers suppose.

At the end of the day, we seem to have some suggestive evidence, but not a fully convincing case. Ramachandran and McGeoch rightly say that evidence from brain imaging studies would be very useful – notably in establishing whether a pre-operative female-to-male transsexual having a phantom penis experience has similar brain activity to a male having normal penis sensations.

Egocentric Space

Picture: egocentric.

The hypothesis that human beings have not one, but two distinct visual systems, now seems to be widely accepted. There are certainly convincing stories to be told about it both in neurological and functional terms. Neurologically, it seems that two different streams draw on the information provided by the primary visual area of the brain. A ventral stream goes eventually to the inferior temporal cortex, while a dorsal stream goes to the posterior parietal lobe. The brain being what it is, things are not quite that simple, of course, and the two streams are not completely isolated, but there seems to be good enough reason to think of them as essentially independent.

Functionally, there is evidence that the brain deals separately with conscious visual perception on the one hand and ‘automatic’ control of actions on the other. The former system, for example, is easier to fool than the latter. When presented with certain kinds of geometrical optical illusion, we may be deceived consciously about the apparent size of an object, but when we reach out to take the object, our fingers open to just the right size anyway. There’s also evidence from the effects of injuries: damage to the relevant regions of the brain may damage our conscious awareness while leaving us apparently able to use our eyes for practical purposes. Perhaps the most extreme cases are the famous instances of blindsight; subjects who cannot see at all so far as their conscious minds are concerned, but who can point to an object, or reach for one, with much greater accuracy than chance guesses could provide. Ramachandran, for one, has suggested that the two visual streams provide a satisfying answer to the puzzle of blindsight.

It is therefore an attractive hypothesis that the two streams serve regions of the brain with complementary roles, one delivering conscious visual experience and the other feeding into accurate motor control; one stream telling you what, and the other where, as some describe it. It seems to me, incidentally, that there is an interesting side-implication here about our fellow primates. Since they have the same two-stream system (in fact, I believe it was originally discovered in macaques), the implication is that they must have the same experience of doing some things deliberately, and others unthinkingly. We might have been tempted to think that animals, relying on instinct, operate on a kind of permanent autopilot, always in the same sort of state we are in when walking along without thinking of where we’re going; but that seems to be ruled out at least as far as primates are concerned.

Be that as it may, a further step has been taken by many researchers, who propose that the two visual systems must encode information differently. The system which delivers conscious perception, drawing on the ventral stream, must surely encode the positions of objects allocentrically, ie in relation to the scene before us, rather than egocentrically, ie plotting distance and direction from the observer. Since this system is concerned above all with identifying things, it’s surely most helpful to have things encoded in a way that doesn’t change every time we move around the room, they argue. In the case of the other system, where the top priority is such tasks as deciding how far to extend your arm in order to grab something, an egocentric scheme of coding makes more sense. This view is buttressed by an argument that the allocentric coding of the perception system is what makes it more vulnerable to optical illusions.
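As a rough illustration of what the two coding schemes amount to (a toy sketch of my own, not taken from any of the papers under discussion): an egocentric code stores distance and bearing relative to the observer, so every value changes whenever the observer moves, whereas an allocentric code stores positions in scene-fixed coordinates, which stay put.

```python
import math

# Toy sketch only: the coordinate conventions here are invented for illustration.

def to_egocentric(obj_xy, observer_xy, observer_heading):
    """Scene-fixed (allocentric) position -> distance and bearing from the observer."""
    dx, dy = obj_xy[0] - observer_xy[0], obj_xy[1] - observer_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - observer_heading
    return distance, bearing

cup = (3.0, 4.0)                                  # allocentric: fixed in the scene
print(to_egocentric(cup, (0.0, 0.0), 0.0))        # seen from the origin
print(to_egocentric(cup, (2.0, 0.0), 1.5))        # move and turn: the egocentric values
                                                  # change, the allocentric (3.0, 4.0) does not
```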

However, in a paper for Philosophy and Phenomenological Research, Robert Briscoe now rides to the rescue of egocentricity. He thinks that our conscious perception can perhaps be regarded in this respect as an extension of proprioception, the special sense which tells us where the different parts of our body are; and both, he claims, are plausibly based on an egocentric coding scheme.

He dissociates himself from two positions which appear to have been severely undercut by the two-systems hypothesis. The first, Experience-Based Control (EBC), asserts that our fine motor control draws on the richness of our conscious perception (not all of which necessarily gets our full attention); the other, the Grounding Hypothesis, is even stronger, claiming that it is necessarily grounded in conscious experience. All such views are rejected by Briscoe, which leaves him free to sidestep the evidence which tells against them. Turning to the arguments based on optical illusion, he argues that the undeniably greater vulnerability of conscious experience might not stem from allocentric coding, but from the diversity of sources drawn on by conscious experience, and hence be attributable to difficulties with integration.

And after all, he concludes, moving from defence to attack, the two systems do in fact communicate and co-operate, with conscious experience identifying the targets and the other stream sorting out the details of the movements required. If they use different coding schemes to represent the positions of objects, there are going to be some problems of translation which don’t arise if both systems are egocentric.

This all seems fairly convincing to me, but I wonder whether the dispute will eventually turn out to have been misconceived: one of those either/or disputes where neither hypothesis actually catches the truth properly. Perhaps allocentric and egocentric coding aren’t really the only alternatives and one or both systems operate on some mixed, intermediate, or entirely different principle. In fact, Briscoe’s own views hint at this to some degree. In his view, conscious perception is an extension of proprioception; but proprioception does not, he says, relate everything to some arbitrary central point – rather, it relates different body parts to each other. If we extend that approach to the external world, we seem to get a system which relates objects to each other rather than a central observer; that sounds as if it has a tinge of allocentrism about it. I accept that a clear distinction can be drawn in practice between the ability to make allocentric and egocentric judgements, but that difference doesn’t have to be reflected all the way down to the level of the coding schemes involved.

Briscoe goes on to offer a few concluding remarks which perhaps reveal something of his underlying motivation: in essence, he wants to preserve a strong connection between conscious perception and agency. I sympathise with this, even with the strong claim that we need at least a potential for agency before we can have perception. But I remain to be convinced that egocentric coding of conscious visual experience is indispensable to the cause.

What’s wrong with computation?

Picture: Pinocchio. You may have seen that Edge, following its annual custom, posed an evocative question to a selection of intellectuals to mark the new year. This time the question was ‘What have you changed your mind about? Why?’. This attracted a number of responses setting out revised attitudes to artificial consciousness (not all of them revised in the same direction). Roger Schank, notably, now thinks that ‘machines as smart as we are’ are much further off than he once believed, though he thinks Specialised Intelligences – machines with a narrow area of high competence but not much in the way of generalised human-style thinking skill – are probably achievable in the shorter term.

One remark he makes which I found thought-provoking is about expert systems, the approach which enjoyed such a vogue for a while, but ultimately did not deliver as expected. The idea was to elicit the rules which expert humans applied to a particular topic, embody them in a suitable algorithm, and arrive at a machine which understood the subject in question as well as the human expert, but didn’t need years of training and never made mistakes (and incidentally, didn’t need a salary and time off, as those who tried to exploit the idea for real-world business applications noted). Schank contrasts expert systems with human beings: the more rules the system learned, he says, the longer it typically took to reach a decision; but the more humans learn about a topic, the quicker they get.
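A toy illustration of why piling on rules can slow this kind of system down (my own sketch, not Schank’s example; the rules are invented): if the engine checks its rules one after another, decision time grows with the size of the rule base, and nothing in the scheme gets faster with experience.

```python
# Toy forward-chaining check: invented rules, purely to show the linear scan.

rules = [
    ({"fever", "cough"}, "suspect flu"),
    ({"fever", "rash"}, "suspect measles"),
    ({"wheeze"}, "suspect asthma"),
    # ...a real expert system might hold thousands of these...
]

def diagnose(findings):
    conclusions = []
    for conditions, conclusion in rules:        # every rule is examined in turn
        if conditions <= findings:              # all of the rule's conditions present?
            conclusions.append(conclusion)
    return conclusions

print(diagnose({"fever", "cough", "wheeze"}))
# More rules mean a longer scan on every query, which is the slowdown Schank describes;
# the contrast is with human experts, who seem to get quicker as they learn more.
```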

Now there are various ways we could explain this difference, but I think Schank is right to see it as a symptom of an underlying issue, namely that human experts don’t really think about problems by applying a set of rules (except when they do, self-consciously): they do something else which we haven’t quite got to the bottom of. This is obviously a problem – as Schank says, how can we imitate what humans are doing when humans don’t know what they are doing when they do it?

Another Edge respondent expressing a more cautious view about the road to true AI is none other than Rodney Brooks. Noting that the preferred metaphor for the brain has always tended to be the most advanced technology of the day – steam engines, telephone exchanges, and now inevitably computers – he expresses doubt about whether computation is everything we have been tempted to believe it might be. Perhaps it isn’t, after all, the ultimate metaphor.

It seems to me that in different ways Schank and Brooks have identified the same underlying problem. There’s some important element of the way the brain works that just doesn’t seem to be computational. But why the hell not? Roger Penrose and others have presented arguments for the non-computability of consciousness, but the problem I always have in this connection is getting an intuitive grasp of what exactly the obstacle to programming consciousness could be. We know, of course, that there are plenty of non-computable problems, but somehow that doesn’t seem to touch our sense that we can ultimately program a computer to do practically anything we like: they’re not universal machines for nothing.

One of John Searle’s popular debating points is that you don’t get wet from a computer simulation of rain. Actually, I’m not sure how far that’s true: if the computer simulation of a tropical shower is controlling the sprinkler system in a greenhouse at Kew Gardens, you might need your umbrella after all. Many of the things AI tries to model, moreover, are not big physical events like rain, but things that can well be accomplished by text or robot output. However, there’s something in the idea that computer programs simulate rather than instantiate mental processes. Intuitively, I think this is because the patterns of causality are necessarily different when a program is involved: I’ve never succeeded in reducing this idea to a rigorous position, but the gist is that in a computer the ‘mental states’ being modelled don’t really cause each other directly; they’re simply the puppets of a script which is really operating elsewhere.

Why should that matter? You could argue that human mental states operate according to a program which is simply implicit in the structure of the brain, rather than being kept separately in some neural register somewhere; but even if we accept that there is a difference, why is it a difference that makes a difference?

I can’t say for sure, but I suspect it has to do with intentionality, meaningfulness. Meaning is one of those things computers can’t really handle, which is why computer translations remain rather poor: to translate properly you have to understand what the text means, not just apply a look-up table of vocabulary. It could be that in order to mean something, your mental states have to be part of an appropriate pattern of causality, which operating according to a script or program will automatically mess up. I would guess further that it has to do with a primitive form of indexicality or pointing which lies at the foundation of intentionality: if your actions aren’t in a direct causal line with your intentions, you don’t really intend them, and if your perceptions aren’t in a direct causal line with sensory experience, you don’t really feel them. At the moment, I don’t think anyone quite knows what the answer is.

If that general line of thought is correct, of course, it would be the case that we cannot ever program or build a conscious entity – but we should be able to create circumstances in which consciousness arises or evolves. This would be a blow for Asimovians, since there would be no way of building laws into the minds of future robots: they would be as free and gratuitous as we are, and equally prone to corruption and crime. On the other hand, they would also share our capacity for reform and improvement; our ability, sometimes, to transcend ourselves and turn out better than any programmer could have foreseen – to start something good and unexpected.

Belatedly, Happy New Year!

Reincarnation

Picture: reincarnated entity.

Picture: Blandula. Jonathan Edelmann and William Bernet have made a sterling effort to bring reincarnation within the pale of scientific investigation. Their paper, in the latest JCS, is not directly concerned with the reality of reincarnation, but with the methods which could be adopted to ensure the academic credibility of future research.

They put to one side cases where the recollection of a previous life is induced by hypnosis, because of the obvious difficulties in ensuring that the subject is not led or influenced by the hypnotist, and concentrate instead on ‘spontaneous’ cases – those where a child comes up with memories of a former life without any particular prompting.

I say ‘without any particular prompting’, but the authors recognise that many alleged cases of reincarnation occur within cultures where a belief in the phenomenon is a religious obligation, and where family members are likely to prompt and encourage any signs of recollection displayed by a child. Indeed, one of their main concerns is to establish interview procedures which will eliminate direct family influence, provide an objective assessment, and include a comparison of the household described in the child’s recollections with a control household.

Picture: Bitbucket. I salute their aspiration to scientific rigour, but their efforts are tragically misplaced. Scientific investigation is wasted if the hypothesis under investigation is incoherent, and I think the notion of reincarnation is pretty much unsalvageable. It rests on a confused conception of identity, and once your ideas about identity are clarified – in any rational way – it becomes absurd. To put the problem at its most general: proponents of reincarnation accept any resemblance between dead and live persons as evidence for reincarnation – it can be personality, memories, tastes, abilities, or even physical characteristics. But if all properties are equally signs of identity, the missing resemblances are as salient as the present ones. If a gift for juggling is claimed as a sign of identity between dead A and live B in one case, I’m entitled to point out that dead X and live Y differ in their juggling abilities, though allegedly Y remembers X’s life. But this is never allowed; only the points of resemblance are ever considered. Frankly, it’s superstition.

Picture: Blandula. I don’t think that’s right. The best evidence for reincarnation isn’t from resemblances of that kind, but the recollection of factual information about previous lives. The ability to produce detailed memories of a former existence is so remarkable that even a few instances constitute striking evidence. The fact that some other details are not recalled does not cancel that evidence out. If I could describe my car and tell you its registration number, that would be good evidence that I really had at least seen it before: the fact that I couldn’t tell you what was inside it or what brand name was on the battery wouldn’t disprove that.

Edelmann and Bernet do seem to countenance a range of different evidence in principle, though: they begin by quoting the four-point SOC (Strength of Case) scale developed at the University of Virginia. The four points are, briefly:

  • birthmarks/defects that correspond to the previous life;
  • strength of statements about the previous life;
  • relevant behaviours that relate to the previous life; and
  • possible connections between present and previous life

Picture: Bitbucket. See what I mean? Look at that first point. Birthmarks and defects? Look, reincarnation is supposed to be the transfer of a soul into another body, not the transfer of a body into a body. If birthmarks and defects are transferred, why not the disease that killed the original person? Why not the signs of old age? If physical traits are transferred, why don’t babies get reincarnated as old people? In fact, why don’t they come back as corpses?

Picture: Blandula. You’re the last person who should be surprised that mental traits can have a physical expression. I’m not asserting that birthmarks are signs of reincarnation, but if you believe in souls, there’s nothing contradictory in supposing that certain physical characteristics impress themselves on the spirit in a way that others don’t, and that these impressed characteristics can then be echoed in the body when the spirit arrives in a new corporeal host.

Anyway, let me finish the exposition before you start arguing – as I explained, we’re addressing methodological issues here rather than the reality of reincarnation in itself.

Edelmann and Bernet propose four phases of research. In the first, the child is questioned about its earlier life in a videotaped interview conducted by professionals, who seek to draw out clear, specific and verifiable information. A second group of researchers evaluates the data gathered in phase 1, checking on the child’s life and circumstances to eliminate the possibility of their having acquired information about a previous life by normal means. In this respect, the authors note that the best subjects are likely to be young children, since they are least likely to have been able to research earlier lives or pick up data by normal means of communication. The second group of researchers go on to draw up a list of 20 ‘descriptors’ – items about the supposed previous life drawn from the interviews with the child. They also identify the site of the earlier life and another superficially like it. They might, for example, find the house where the child claims to have lived, and then pick a house with the same number in a different street.

On to phase 3. A further group of researchers is now given the two addresses (without being told which is which) and the list of descriptors: they then score both sites according to how many of the descriptors apply. In the final phase, the whole exercise is re-examined for flaws or mistakes, and the results evaluated statistically.
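Just to make the phase 3 and 4 machinery concrete, here is a minimal sketch of how the blind scoring of the two sites might look (entirely my own toy illustration; Edelmann and Bernet specify the design, not any code, and the descriptors below are invented).

```python
# Sketch of the phase 3 comparison: the descriptors and match data are invented.

descriptors = ["blue front door", "well in the yard", "three brothers",
               "owned a sewing machine"]        # ...the real design uses a list of 20

# What a researcher, blind to which address is which, judged to apply at each site:
site_a_matches = {"blue front door", "well in the yard", "three brothers"}
site_b_matches = {"blue front door"}

score_a = sum(d in site_a_matches for d in descriptors)
score_b = sum(d in site_b_matches for d in descriptors)
print(f"site A: {score_a}/{len(descriptors)}  site B: {score_b}/{len(descriptors)}")

# The final phase would then unblind the sites and test statistically, over many such
# cases, whether the claimed sites score reliably higher than their matched controls.
```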

Sadly, no research along these lines has taken place, but it seems to hold out the possibility of opening up reincarnation for proper scientific research.


Picture: Bitbucket. The trouble is, the research is still going to be tainted, isn’t it? We’re dealing with cultures where reincarnation is accepted; how can they ever eliminate the possibility that Mum and Dad have hit on the idea that Junior is the reincarnation of Great-Uncle Jasper, and told him all sorts of stuff they know about the old man and his life? It seems hopeless to me. Maybe it would work if they could show that Junior was able to remember Uncle Jasper’s bank account password, or something else that no-one else would have known.

But it’s hopeless on a deeper level. What are the criteria for personal identity? Nobody really has an uncontroversial answer, so how can we begin making claims that a particular live person is identical with a certain dead one?

After all, I am not exactly the same as I was a year ago, let alone as I was when I was five: some people would say that my claim to be identical with that five year old is really only a kind of polite or convenient fiction. I wouldn’t go that far: it seems to me that biological continuity is a pretty good indicator of identity. I’d be inclined to make death and birth the ending and beginning of new individuals more or less by definition – so even if you are just like Uncle Jasper, and even share some of his memories, that doesn’t mean you are him.

Picture: Blandula. I accept that there are issues about identity. It might be that reincarnation doesn’t turn out to be what we think it is. Edelmann and Bernet suggest that if reincarnation really happens, we must transform our view of the ontology of consciousness, rule out reductive materialism, and look again at non-physical views. I think they’re only partly right: it seems perfectly feasible to me to come up with a version of reincarnation which is compatible with materialism. Just to take an easy example: suppose consciousness really is a kind of electromagnetic buzz, as people like Pockett and McFadden have supposed. It seems prima facie possible that this buzz could get echoed or stored in some way and have an effect on the emergent buzz of an infant, transferring memories and personality traits – perhaps even some physical ones.

The way I see it, the first step is to establish the reality or otherwise of the transfer of memory – metemmnemonism? – along the lines Edelmann and Bernet have suggested. Then we can have the discussion about whether ‘metemmnemonism’ implies metempsychosis.


Picture: Bitbucket. Gotta love your idea of rigorous scientific materialism there – no souls, just ‘buzzes’.

I’d be happy with the idea of this research if I didn’t know how it would go. Suppose the research takes place and finds no reliable evidence of reincarnation. Will the researchers then conclude the thing is disproved, case closed? No: they’ll say they failed to find evidence, but someone should have another go. Negative results will not be counted, but when some idiot messes up the procedures and gets an invalid positive result, it’ll be acclaimed and enter the mythology as cast-iron proof. That’s paranormal research for you – we may not get reincarnated, but the discredited theories always come back from the dead.

Call me Four Brains

Picture: brains. The New Scientist had a rather rambling piece on recent ideas about the subconscious the other day. One thing I thought was interesting was the four-part model of the mind it described, attributed to Dayan, Daw and Niv. I haven’t been able to find a paper which actually sets out the four-part system: the one referred to by the New Scientist is really about how control is passed between two of the four systems – but the gist is fairly clear. One of the appealing things about this line of thought is that each system has a fairly robust basis in neurology, with evidence of functions being localised in particular brain areas; but also a rationale in terms of what the system does for us and why this particular combination of systems might work well.

To begin with, we have the ‘Pavlovian’ system. Pavlov, of course, is famous for his work on conditioned reflexes: training dogs to salivate at the sound of a bell, and so on. Here the term is used rather loosely to cover instincts and a range of ‘automatic’ behaviour. It may in fact seem doubtful whether all such behaviour stems from a single system, and indeed in this case a single clear neurological location doesn’t exist, though various places have a role. The chief advantage of behaviour this hard-wired is clearly speed, and the main limitation is the stereotyped nature of the behaviour: well-suited to cases where the required action is clear and urgent, and not so good elsewhere.

Then we have the habitual controller, which I think we could call the ‘autopilot’: responsible for more complex learned behaviour. This system can take control of behaviour for an extended period, and respond appropriately to inputs so long as they fall within the expected range; so that when driving, for example, we may take account of the progress of the car in front without having to think about it consciously. This second system can cope with much more complex problems than the Pavlovian one, but it is still largely stereotyped and when something unexpected comes up it hands over abruptly to another system.

That might be the episodic controller, which makes decisions in a conscious but unreflective manner, drawing on memory of what happened before. It works well in circumstances where events are, as it were, following a script and the broad outlines of what is going on are known. Although it takes a little more time and thought than either of the preceding systems, it is still fast and it has the advantage of working relatively well when detailed knowledge is not available: so long as the circumstances are broadly familiar, we can go on producing generic behaviour which is likely to be appropriate.

The fourth system is the goal-directed controller; here for the first time the brain attempts to look forward and decide which behaviour is going to achieve its goals. It is slower and consumes more resources than the other systems, and it can only really work where enough information is available, but these disadvantages are outweighed by its sophistication and flexibility. Dayan et al describe it as searching through a tree of possibilities, which I think may under-rate or misdescribe it – the brain seems to be able to devise goal-oriented strategies by some other means than working through alternatives – but that doesn’t invalidate the overall model.
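A very schematic sketch of the division of labour the model proposes (my own toy code, with invented states and values; Dayan, Daw and Niv work in formal reinforcement-learning terms, not anything like this): the habitual controller is essentially a cached lookup from situation to action, while the goal-directed controller searches forward through a tree of possible outcomes before choosing.

```python
# Toy contrast between a habitual (cached) and a goal-directed (tree search) controller.
# The 'world model' and values here are invented purely for illustration.

habit_table = {"red light": "brake", "green light": "accelerate"}   # cheap, fast, rigid

world_model = {   # state -> {action: (next_state, reward)}
    "junction": {"left": ("market", 2), "right": ("motorway", 5)},
    "market":   {"park": ("done", 1)},
    "motorway": {"drive": ("done", 3)},
}

def habitual(state):
    return habit_table.get(state)                 # hands over if nothing matches

def goal_directed(state, depth=3):
    """Search the tree of possibilities; return (value, best first action)."""
    if depth == 0 or state not in world_model:
        return 0, None
    best = (float("-inf"), None)
    for action, (nxt, reward) in world_model[state].items():
        value = reward + goal_directed(nxt, depth - 1)[0]
        best = max(best, (value, action))
    return best

print(habitual("red light"))          # fast cached response
print(goal_directed("junction"))      # slower, but weighs up future consequences
```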

So if the model has a good basis in both neurology and function, how does it fare against introspection – does the mind really feel like a four-part operation from the inside? I think in many respects the model is recognisable, though I’d quibble over some details. The main weakness, as I see it, is that the perspective which the whole thing is based on is very much one in which the function of the mind is to produce good output behaviour from a range of different environmental inputs. That’s fine as far as it goes, but consciousness is generally taken to be about awareness and experience as much as decision-making. When I’m sitting still and admiring the view across the bay, which system is working? None, it would appear, but in old-fashioned two-part terminology, I should have said that both conscious and subconscious were hard at work. Perhaps all the systems are working in harmony to produce the appropriate output behaviour of not doing anything?

Laughing computers

Picture: smiling computer. We’ve discussed here previously the enigma of what grief is for; but almost equally puzzling is the function of laughter. Apparently laughter is not unique to human beings, although in chimps and other animals the physical symptoms of hilarity do not necessarily resemble the human ones very closely. Without going overboard on evolutionary explanation, it does seem that such a noteworthy piece of behaviour must have some survival value, but it’s not easy to see what a series of involuntary and convulsive vocalisations, possibly accompanied by watering eyes and general incapacitation, is doing for us. Shared laughter undoubtedly helps build social solidarity and good feeling, but surely a bit of a hug would be fine for that purpose – what’s with the cachinnation?

Igor M. Suslov has a paper out, building on earlier thoughts, which presents an attempt to explain humour and its function. He thinks it would be feasible for a computer to appreciate jokes in the same way as human beings; but the implication of his theory seems to be that a sophisticated computer – certainly one designed to do the kind of thinking humans do – would actually have to laugh.

Suslov’s theory draws on the idea (not a new one) that humour arises from the sudden perception of incongruity and the resulting rapid shift of interpretation. When cognitive processes attain a certain level of sophistication, the brain is faced with many circumstances where there are competing interpretations of its sensory input. Is that a bear over there, or just a bush? The brain has to plump for one reading – it can’t delay presenting a view to consciousness until further observations have resolved the ambiguity for obvious practical reasons – and it constructs its expectations about the future flow of events on that basis: but it has the capacity to retain one or two competing interpretations in the background just in case. In fact, according to Suslov, it holds a number of branching future possibilities in mind at any one time.

The brain’s choice of scenario can only be based on an assessment of probability, so it is inevitably wrong on occasion – hey, it’s not a bear, after all! In principle, the brain could wait for the currently assumed scenario to drain away naturally when it reached its end: but the disadvantages of realising one’s error slowly are obvious. Theoretically another alternative would be to delete all recollection of the original mistake: but the best approach seems to be to tolerate the fact that our beliefs about the bush conflict with what we remember believing. The sudden deletion of the original interpretation is the source of the humorous effect.
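Here is a minimal sketch of the kind of mechanism that seems to be implied (entirely my own toy construction, not anything from Suslov’s paper): several readings of the scene are held with rough probabilities, one is committed to, and when an observation falsifies it the whole committed branch is dumped in a single step rather than allowed to drain away.

```python
# Toy version of "commit to one reading, rapid-delete it when falsified".
# Everything here is an invented illustration of the idea, not Suslov's model.

EXPECTATIONS = {"bear": ["growl", "movement"], "bush": ["stillness"]}

class Interpreter:
    def __init__(self, priors):
        self.priors = priors
        self.committed = max(priors, key=priors.get)   # plump for the likeliest reading
        self.expected = list(EXPECTATIONS[self.committed])

    def observe(self, event):
        """Return the size of the branch dumped by a rapid delete (0 if none needed)."""
        if event in self.expected:
            return 0
        discarded = len(self.expected)                 # the whole committed branch goes at once
        self.committed = min(self.priors, key=self.priors.get)   # fall back on the rival (toy: only two)
        self.expected = list(EXPECTATIONS[self.committed])
        return discarded                               # stand-in for the sudden 'discharge'

mind = Interpreter({"bear": 0.7, "bush": 0.3})
print(mind.committed)                # 'bear'
print(mind.observe("stillness"))     # falsified: 2 cleared in one go, the candidate 'laugh'
print(mind.committed)                # 'bush'
```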

Suslov has drawn on the views of Spencer, which had it that actual physical laughter was caused by the discharge of nervous energy from mental process into the muscles. This theory, once popular, suffered the defect that there really is no such thing as ‘nervous energy’ which behaves in this pseudo-hydraulic style; but Suslov thinks it can be at least partially resurrected if we think of the process as excess energy arising from the clearance of large sections of a neural network (when a scenario is deleted). He recognises that this is still not really an accurate biological description of the way neurons work, but he evidently still thinks there’s an underlying truth in it.

One further point is necessary to the plausibility of the theory, namely that humour can be driven out by other factors. We may laugh when we realise the ‘bear’ is really a bush, but not when we make the reverse discovery. This is because the ‘nervous energy’, if we can continue to use that term, is directed into other emotions, and hence goes on to power shaking with fear rather than laughter. Suslov goes on to explain a number of other features of humour in terms of his theory with a fair degree of success.

An interesting consequence if all this were true, it seems to me, is that a network-based simulation of human consciousness would also necessarily be subject to sudden discharges. It seems to me this could go two ways. Either the successful engineers are going to notice this curious and possibly damaging property of their networks, or at some stage they are going to encounter problems (the frame problem?) which can in the end only be solved by building in a special rapid-delete facility with a special provision for the tolerance of inconsistency. Use of this facility would amount to the machine laughing.

Would it, though? There would be no need, from an engineering point of view, to build in any sound effects or ‘heave with laughter’ motors. Would the machine enjoy laughing, and seek out reasons to laugh? There seems no reason to think so, and it is a definite weakness of the theory that it doesn’t really explain why humour is anything other than a neutral-to-unpleasant kind of involuntary shudder. Suslov more or less dismisses the pleasurable element in humour: it’s more or less a matter of chance, he suggests, just as sneezing happens to be pleasant without that being the point of it. It’s true that humans are good at taking pleasure in things that don’t seem fun at first sight; making the capsaicin which is designed to deter animals from eating peppers into the very thing that makes them taste good, for example. But it’s hard to accept that funny things are only pleasant by chance; it seems an essential feature of humour is being left on one side.

It’s also possible to doubt whether all humour is a matter of conflicting interpretations. It’s true that jokes typically work by suddenly presenting a reinterpretation of what has gone before. Suslov claims that tickling works in a similar way – our expectations about where the sensation is coming from next are constantly falsified. Are we also prepared to say that the sight of someone slipping on a banana skin is funny because it upsets our expectations? That might be part of it: but if conflicting interpretations are the essence of humour, optically ambiguous figures like the Necker cube should be amusing and binocular rivalry ought to be hilarious.

There are of course plenty of technical issues too, apart from the inherent doubtfulness of whether the metaphor of ‘nervous energy’ can really be given a definite neurological meaning.

One aspect of Suslov’s ideas ought to be testable. It’s a requirement of the theory that the discarded interpretation is deleted, otherwise there is no surplus ‘nervous energy’. But why shouldn’t it simply recede to the status of alternative hypothesis? That seems a more natural outcome. If that were what happened, we should be ready to change our minds back as quickly as we changed them the first time: if Suslov is right and the discarded reading is actually deleted, we should find it difficult to switch back to the ‘bear’ hypothesis once we’ve displaced it with the ‘bush’ reading. That ought to show up in a greater amount of time needed for the second change of mind. I doubt whether experiments would find that this extra delay actually occurs.

The silence of the apes

Picture: Bonobo. This piece by Clive Wynne reviews the well-known attempts which have been made to teach chimps (or bonobos) to use language, and draws the melancholy conclusion that the net result has in the end merely confirmed that grammar is uniquely human. It seems a fair assessment to me (though I always find it difficult not to be convinced by some of the remarkable videos which have been produced), but it did provoke some thoughts that had never occurred to me in this connection before.

According to Wynne, the chimps show clear signs of recognising a number of nouns, but no sign of either putting the nouns in the right order, or recognising the significance of the order in which they have been put by humans. They cannot, in other words, distinguish between ‘snake bites dog’ and ‘dog bites snake’, which is a key test of grammatical competence.

But word order is obviously not the whole story so far as grammar is concerned. One of the first things you learn in Latin is that in that language, although there may be a preferred word order, it isn’t grammatically decisive: ‘Serpens mordet canem’ means the same as ‘Canem mordet serpens’ (to express the reversed relationship, you’d have to say ‘Canis mordet serpentem’). Perhaps apes just have trouble with grammars like that of English which rely on word order; perhaps they would do better with a language which used inflection, or some other grammatical mechanism instead? Was failure, in short, built into these experiments just as surely as it was into the doomed earlier attempts to teach them to speak?
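To make the contrast concrete, here is a toy sketch of the two ways a grammar can mark who bites whom (my own illustration; the ‘case endings’ are crude stand-ins, not real Latin morphology): by position, as in English, or by inflection, where the ending does the work and the order is free.

```python
# Toy contrast: role-by-position (English-style) vs role-by-ending (Latin-style).
# The 'case endings' below are crude stand-ins, just to show the mechanism.

def parse_by_order(words):
    """First noun is the agent, last noun is the patient."""
    agent, verb, patient = words
    return {"agent": agent, "verb": verb, "patient": patient}

def parse_by_case(words):
    """Role is read off the ending, so word order carries no grammatical weight."""
    roles = {}
    for w in words:
        if w.endswith("-NOM"):
            roles["agent"] = w[:-4]
        elif w.endswith("-ACC"):
            roles["patient"] = w[:-4]
        else:
            roles["verb"] = w
    return roles

print(parse_by_order(["snake", "bites", "dog"]))          # order decides the roles
print(parse_by_case(["dog-ACC", "bites", "snake-NOM"]))   # same meaning, order scrambled
```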

Two quite different languages were involved in the different experiments: Washoe and other chimps were taught ASL, a sign language used by deaf humans; Kanzi and others were taught to communicate in specially-created lexigrams, symbols arranged on a keyboard, though the experimenters apparently used spoken English for the most part.

I don’t know much about ASL, but it does appear to use word order, albeit a different one from that in normal English; typically the topic is mentioned first, followed by a comment. You can do this sort of thing in English of course (‘That snake – the dog bit it.’), but it isn’t standard. If you want to specify a time in ASL, which might be done with tenses in English, you should mention it first, before the topic. In making your comment, the word order appears to be similar to the standard English one, though there may be some degree of flexibility. My impression is that ASL users would tend to break down the information they’re conveying into smaller chunks than would be normal in English, taking a clause at a time to help minimise ambiguity. There is something called inflection in ASL, but it isn’t the kind of conjugation and declension we’re used to in Latin, and doesn’t play the same grammatical role. In fact, one important grammatical indicator in ASL is facial expression – a possible problem for the chimps, although they could presumably manage some of the basic head-tilting and eye-brow (alright, brow-ridge) raising.

With lexigrams, the relationship to standard English is closer: each of the 384 lexigrams is equivalent to an English word, and indeed some consist of the word written in a particular shape with particular colours. This obviously makes things easy for the experimenters and in some ways for the bonobos, who would otherwise be faced with learning two languages, heard English and spoken lexigram. The grammar involved is therefore essentially English, and if anything the use of lexigrams makes word order even more crucial, since verbs are necessarily invariant and there are no plurals: so we don’t even get the kind of extra clues we might have in an English sentence like ‘The dog bites the snakes’.

Prima facie then, it does seem to me that unless the chimps were naturally at ease with using English-style word order as their sole grammatical tool, they were actually given little scope to demonstrate grammatical abilities by any of these experiments. We can perhaps follow the implications a little further. ASL is not very much like ordinary English in its grammar or structure. The adoption of a different channel of communication by deaf people appears to have called for a very different language. It seems natural to suppose, then, that if we require even more radically different channels to communicate with chimps we need a language even more remote from English. Perhaps both ASL and lexigrams are too strongly adapted for human use: true communication may require a form of language which is novel and as difficult for human beings to learn as it is for the chimps; one in fact which might require some rethinking of how grammar can be expressed (something similar had to happen before it was accepted that ASL and other sign languages had true grammar). But if merely understanding this hypothetical language would be dauntingly difficult for us, it hardly seems probable that we could construct it in the first place.

The only way such a language could be constructed, I think, is if the chimps were able to make an equal contribution from their side, rather than being captives drilled in an essentially human style of communication. If a human and chimp community enjoyed a close but free relationship of real importance to both, possibly based on trade or similar relations of mutual benefit, perhaps the differing conventions of different species could be shared and a kind of pidgin developed, as happened all over the world when Western traders first appeared – although this time it would have to be a non-vocal one. The chances of anything like this happening, if not zero in any case, are of course remote, and growing less all the time, so sadly the chances are that if chimps do after all have some grammatical ability, we’ll never really know about it.