The Final Frontier

Picture: Kirk transports.

It’s not uncommon for philosophers of mind to refer enviously or scathingly to the transporter they used to have on board the U.S.S. Enterprise, which could dematerialise people in one place and then reconstitute them with exquisite accuracy in some other, distant place. I assume that the way this worked was that the transporter produced a comprehensive physical description of, say, Kirk, which was then sent off to the destination and somehow used as the basis of a perfect reconstruction. I’m not sure modern physics allows this sort of thing, though I’ve seen articles which suggest it does, at least on a small scale.

Whatever the physics, the device raises many practical questions. Why didn’t they just transport to places, instead of using a spaceship? Or if that was impossible, why not store crew and equipment as data, and reconstitute them only when needed? Why did they use photon torpedoes when a transporter could presumably have scrambled the atoms of any Klingon ship at the press of a button? I’m sure there are good answers to all these questions, but the one that attracts the attention of philosophers is: was the person who arrived on the planet’s surface the same James T. Kirk who stepped into the transporter aboard the Enterprise?

Bitbucket Of course it would have been the same person! Think of this – every molecule in Kirk’s body gets changed time and again through the normal processes of human metabolism. When he gets reconstructed on the planet’s surface, it’s really no different – he’s just put together from a different set of elementary particles. If the actual particles were what mattered, his identity would be changing all the time as his cells rebuilt themselves from the new molecules derived from his food. When we’re talking about identity, it’s the form and causal relations that matter, not the actual physical stuff – and the transporter reproduces them perfectly.

Blandula That’s all very well, but consider this: if the transporter malfunctions, it can produce two copies of Kirk. Now if there’s one thing that’s certain about personal identity, it’s that each person only gets one – so how can there be two? For that matter, Kirk’s descendants could pick up the echoes of the transporter beam from deep space long after he’s dead and reconstruct a dozen copies – they can’t all be him, can they?

Excuse me interrupting, chaps, but can I introduce another idea? You both seem to be atheists, but most people have always attributed personal identity to an immaterial soul or spirit. Could it be that the question is essentially whether the transporter moves Kirk’s soul along with his body?

Blandula I am not an atheist. In my view, faith and science are different levels of discourse, rather than contradictory accounts – but they do ultimately address the same world. We know that physical events can have spiritual consequences – to take a crude example, a blow on the head can effectively separate your soul from your body. I therefore don’t have any problem with the idea that if Kirk’s physical body were moved by the transporter, his soul would follow. What I challenge is the view that what the transporter does is transport, rather than duplicate.

Bitbucket So once again the spiritual hypothesis turns out to be null – adding nothing to the physical account. But look, in your example, why shouldn’t all the copies be Kirk? I’m not suggesting they go on being the same person after the split, but they all have an equal claim to Kirk’s causal history – they’ve all got his memories and his characteristics. So they’re all Kirk: it’s just that Kirk split into several people. I see that that might cause legal difficulties over his property, but so what?

The point is, in the end Kirk’s identity is a set of data like anything else, and data doesn’t care about the details of the medium it’s recorded in, so long as there is functional equivalence! On your view, Kirk’s identity depends on the identity of the atoms which make up his body – but do atoms have identities? Aren’t they just maths, in the final analysis?

Blandula The irony here is that you claim to be more of a materialist than me, yet you want Kirk to be some mathematical abstraction, or some kind of program, whereas I insist that he is a physical object – and that if you disperse that object, he dies. Of course he grows and changes in the normal course of events, but brute physical continuity is what matters.

If your view were true, then Kirk’s consciousness would have to split in order to go with each of his duplicates – but I ask you, isn’t it obvious that consciousness is essentially unitary? My consciousness can come and go, but the idea that it could split is simply unimaginable, isn’t it?

Bitbucket Not to me. Anyway, some people find death unimaginable – doesn’t mean it won’t happen…

Blandula But if Kirk’s identity and consciousness are merely data, then they don’t really have a substantial existence. If he’s just data, then he exists just as much in the unreconstructed transporter signal as in the reconstruction – the fact that the machine has run up a physical instance of the data isn’t that important. And actually, even the signal doesn’t matter that much, since the numbers don’t actually depend on the signal for their existence – they exist in some Platonic sphere, anyway. But that’s true of all the possible sets of data. So on your view, there’s no real difference between people who do exist and those who don’t but might have…

Bitbucket Frankly I think you’re a little confused…

The Good, the Bad, and the Conscious

Picture: Clint.

Bitbucket A team led by Uri Hasson at Tel Aviv University used an fMRI scanner to monitor the brains of a series of subjects while they were each shown the same 30 minutes of ‘The Good, the Bad, and the Ugly’.

“We reasoned that such rich and complex stimulation will be much closer to ecological vision relative to the highly constrained visual stimuli used in the laboratory”, he explained.

The results showed a remarkable consistency – the same film evoked just the same patterns of neural activity in all the subjects. In itself this might not seem extraordinary, until you consider the implications – this is strong evidence that the experience of watching the film is the same for everyone. I think many people would have expected that individuals’ different neural wiring would influence their response, and cause them to process the images in quite different ways. Well, it seems they don’t. So what does that tell us about private, unique, incommunicable qualia?

Blandula Nothing, of course. You can’t read anything into these results. In the first place, we already knew that the actual patterns on the retina get relayed right across the brain with only a small amount of topological distortion. I’d bet that most of what the Tel Aviv people picked up was a direct echo of what was on the screen. In the second place, watching a film is an inherently passive business. When you follow a film, you automatically stop thinking about anything else. If the subjects had been asked to watch real people enacting a situation in which they were, or could be, involved, it would have been different.

Bitbucket Well, I agree there are limits to what you can read into this particular project, but in my view it puts the writing on the wall. Qualia are going the way dreams went. For hundreds of years dreams were a deep philosophical mystery; then we discovered that everybody dreams during REM sleep, and that to some extent the content of the dreams can be influenced by outside stimuli. Now all this was pure empirical science, based on waking the subjects up and asking them what they had just been dreaming. In principle, it doesn’t touch the philosophical problem at all. The philosophers should have been in there asking how the scientists knew, or could know, that reports given after waking truly reflected dreams which occurred while asleep. Couldn’t it be that the act of waking people up from REM sleep caused a false memory of a preceding dream they’d never actually had, for example?

But in fact, I don’t remember any significant resistance along these lines. The truth is, once the gap in our scientific knowledge had been filled, the residual philosophical point no longer seemed important. You could still be philosophically doubtful about, say, whether people dream every night, but only in the same kind of way you could be doubtful about whether anyone else has a mind at all – nobody doubts such things in practice.

Within a few years, science is going to offer direct unequivocal data showing that the red you see is the same as the red I see. That won’t dispel the philosophical “hard problem” of qualia, strictly speaking – but if the philosophers want to maintain the view that it’s a real issue, they’d better man the barricades now. Otherwise, people are just going to stop talking about qualia – and not before time!

Blandula Or perhaps it will be like artificial intelligence, where the first few steps were easy and then the ground suddenly dropped away from beneath the researchers’ feet. Perhaps reading people’s minds from brain scans is going to be a bit more difficult than you think.

Bitbucket I don’t say it’s going to be easy. But I’ve got evidence on my side – all you’ve got is hope. Or should that be dogma?

Lucy in Disguise

Picture: Lucy.

Bitbucket Steve Grand has written another book, “Growing up with Lucy”, about his long-term robot-building project: partly, it seems, because he needs the money, but also because he has some interesting progress to report.

Grand, as you probably know, became famous as the brains behind the computer game ‘Creatures’, which used a far more sophisticated version of ‘artificial life’ than any of its predecessors or competitors (or successors, I suspect). In some ways, however, Grand represents an older tradition of computer games design. There was an era when computers were readily available but processing power, storage, and graphics were still severely limited. In those days it was relatively normal for an individual with a good idea to run up a world-beating game alone in his bedroom or garage. When better graphics came along and computer games started emulating feature films, all that changed. But it is in the same sort of spirit that Grand has set out to build a better robot, believing that one good man with a sound idea can achieve more than a vast institution governed by committees. He derides those who claim that the geometric increase in computer power predicted by ‘Moore’s Law’ means that true machine consciousness is only a matter of time.

What exactly is he trying to achieve? Not a human level of consciousness, though he appears to have set his sights on an orang-utan, surely among the most thoughtful and reflective of animals. He believes that building a physical robot, and one which works in the same way as a real animal (its voice generated by real mechanical mouth/throat action rather than a chip, for example), is the most enlightening way to carry out this exercise, and he has some interesting ideas about how neural systems might be set up in such a way that they structure and organise themselves merely through experience.

Blandula I don’t really know why someone with Grand’s programming background is insisting on building a physical robot rather than simulating the whole thing. So long as it was a sufficiently scrupulous simulation, I don’t see any issues of principle. Having to actually build the mechanical and electronic apparatus means he might easily get held up by practical engineering problems which are really nothing to do with the key issues.

There seem to me to be a few contradictions (or tensions, at least) in what he says. He says it’s no good building bits of an organism in the hope of later patching them together – you have to have the whole thing for it to work properly. Yet he’s missing out the higher levels of consciousness, which are surely deeply implicated in vision, for example (Grand himself insists that vision and other mental processes are a mixture of bottom-up and top-down pathways, which he refers to as ‘Yin’ and ‘Yang’). He scorns AI gurus who think they can get to consciousness by gradually improving what they’ve already got – he says it’s like trying to get to the Moon by gradually getting better at jumping – but at the same time he seems to think that a system of servos for a model glider gives him the beginning of a path towards genuinely mental processes. Worst of all, though, he denies trying to produce real human-level consciousness, but puts a grotesque rubber mask, red wig, and dummy second eye on his robot, and constantly refers to it as his daughter.

Bitbucket I knew you were going to take this line! Calling Lucy his daughter is really just a joke, for heaven’s sake, but it does have the value of being provocative, helping to attract attention and (perhaps) funding. And I think it’s reasonable to claim that it helps him if he thinks of Lucy in personal terms – it sets the right context.

Blandula That line of reasoning reminds me vividly of a book I once read by an old-school ventriloquist. He insisted that you should always refer to the dummy as your ‘partner’, and treat it as a person, if you wanted your performance on stage to have real conviction. The difference is that ventriloquists admit they are trying to create an illusion! I’m particularly unhappy about that second eye. It actually seems to be a ping-pong ball or something, and looks nothing like the functioning camera eye. But on the dust-jacket of the book, it has been photoshopped to look exactly like the camera eye, reducing the repellent Frankenstein quality which I’m afraid Lucy certainly has. Retouching your pictures like this is perilously close to the line that separates window-dressing from actual deception.

The point about attracting attention is all very well, but as well as getting publicity the mask raises suspicions which actually harm Grand’s credibility. Without it, people may think, it would be all too apparent that what Grand describes as one of the most advanced research robots in existence could also be described as a camera with non-functional arms. Of course, it may be that both descriptions are valid…

Bitbucket I’m sorry, I’m just not going to waste any more time arguing about presentation when we could be talking about Grand’s actual ideas.

He rejects the idea that there is a straightforward ‘algorithm of thought’ and also most current connectionist models, which he says bear no resemblance to real neural structures. He does say that he thinks you need a robot complete enough to tackle something other than ‘toy’ problems, but I don’t accept that that conflicts with his approach to Lucy, who – alright, which – is just a test-bed for ideas anyway.

His working hypothesis is that there is a basic standard pattern of neuronal organisation which is used over and over again for slightly different functions in different parts of the brain. So instead of bolting together a set of incompatible specialised modules, the task is to find a single ‘component’ that can be endlessly re-customised. Basically, this comes down to neural maps which he thinks can carry out co-ordinate transforms, thereby controlling behaviour in the same sort of way as – yes – servos on an automated model glider. The point here is that the servos effectively try to make a particular quantity – flap angle, rate of roll, angle of bank, and so on up the hierarchy – match the one specified by a higher-level controller. His argument is not the naive one you suggest, but he does point out that if you had an automatic glider which had maximum altitude as its target value, there would be an analogy with a ‘desire’ or ‘intention’ on the part of the glider to stay as high as possible.
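Just to make the shape of that idea concrete, here is a toy sketch – my own invention, not anything from Grand’s book, with made-up class names and gains – of a cascade of controllers in which each level tries to drive its own quantity towards a target handed down by the level above it, and the top-level target plays the role of the glider’s standing ‘desire’ to stay as high as possible:

```python
# Toy sketch only: invented names and gains, not Grand's actual design.

class Servo:
    """A proportional controller that nudges its quantity towards a target."""
    def __init__(self, gain):
        self.gain = gain
        self.target = 0.0

    def update(self, measured):
        # Output is proportional to the gap between target and measurement.
        return self.gain * (self.target - measured)

# One controller per level of the hierarchy described above.
altitude_ctl = Servo(gain=0.5)   # top level: altitude
bank_ctl = Servo(gain=0.8)       # middle level: angle of bank
roll_ctl = Servo(gain=1.2)       # bottom level: rate of roll -> flap command

def control_step(altitude, bank, roll_rate):
    # The standing 'desire': stay as high as possible (capped here at 1000 units).
    altitude_ctl.target = 1000.0
    # Each level hands its output down as the next level's target.
    bank_ctl.target = altitude_ctl.update(altitude)
    roll_ctl.target = bank_ctl.update(bank)
    # The lowest level finally moves the control surface.
    return roll_ctl.update(roll_rate)

print(control_step(altitude=950.0, bank=2.0, roll_rate=0.1))
```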

Blandula Yes – on the same kind of level as the thermostat’s ‘desire’ to keep the room at the right temperature, or for that matter, the ‘intention’ of a broken glider to plummet crazily to earth.

Bitbucket This is about as close as Grand gets to the big issues of consciousness. Somewhere near or at the top of conscious organisation he speculates that there is a map which sets out a kind of ideal target state for the organism. This could be seen as representing its desires or drives; another of his angles is that a key mental task is predicting events; bringing the prediction map and the desire map into line could be what it’s all about. He remarks that a saccade, an automatic jumping of the eyes towards a sudden movement or other point of interest in the visual field, is always interpreted as a conscious decision, with the strong implication that other forms of consciousness only ‘seem’ to be in control. Taken together with scepticism about free will, I think this gives us a fair idea of his philosophical position, even without further discussion.

That’s not to say there isn’t a lot of interesting and relevant stuff here, though. Grand describes the neuronal structure of the brain and points out how half or more of the signals in sensory areas are actually going in a top-down direction; perception, he suggests, is anything but passive. He reckons there is a kind of tripod structure of paired up and down paths. He has some rather speculative ideas about how neurons might spontaneously organise themselves into ‘pinwheel’ structures that help to identify edge orientation, and he describes a method which the brain just might be able to use to identify shapes regardless of size and orientation. All this is pretty speculative, but it has a new and promising feel.

Blandula I don’t know about that. It seems to me some of this stuff is unlikely (there are a few rather serious problems about mental representations of people’s desires which don’t arise in the case of automatic glider-steering systems) and the rest is not very original. The idea that neuronal maps could do co-ordinate transforms, and thereby make eyes saccade towards a point of interest, and arms reach out towards it, was covered by Paul Churchland in a fairly well-known paper in Mind in 1986 – ‘Some Reductive Strategies in Cognitive Neurobiology’, where he also suggested the same kind of maps might have other interesting uses. But saccading eyes are a favourite trick of AI robots – Rodney Brooks’ Cog, for example. I’m sorry to come back to this point again, but it’s hard not to feel that eyes that follow you round the room are popular because they make the robot ‘seem alive’ in the same way as Lucy’s mask. It looks as if the AI people are so tired of getting nowhere that they are tempted towards shallow but crowd-pleasing tricks just to get some popular approval. Look at Kismet, another robot from Brooks’ laboratory, mainly the work of Cynthia Breazeal: it has big eyes, lips, and other facial features which it uses to interact with people, gurgling at them, looking them in the eye, and so on. It’s supposed to have internal states analogous to emotions, and it apparently requires fifteen different computers, but frankly it seems to me to bear an embarrassingly clear resemblance to a Furby.
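The transform itself is easy enough to sketch, by the way – here is a purely invented illustration (the matrix and its numbers are mine, standing in for a neural map; nothing taken from Churchland or Grand) of the kind of mapping that takes a point of interest in retinal co-ordinates to the eye-movement commands that would centre the gaze on it:

```python
# Invented illustration only: a fixed matrix stands in for a neural map.
import numpy as np

# Hypothetical gains: degrees of eye rotation per unit of retinal displacement.
retina_to_motor = np.array([[30.0,  0.0],   # horizontal offset -> pan
                            [ 0.0, 25.0]])  # vertical offset  -> tilt

def saccade_command(retinal_offset):
    """Transform a point of interest in retinal co-ordinates into pan/tilt commands."""
    return retina_to_motor @ np.asarray(retinal_offset, dtype=float)

# A flash of movement up and to the right of the current fixation point:
print(saccade_command([0.2, 0.1]))   # pan and tilt angles that would foveate it
```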

Bitbucket Scoff all you like: I think an edge of eccentricity is part of Steve Grand’s appeal. You never know quite what he might come up with.

Blandula I agree that the ‘mad inventor’ personality has a certain charm, and perhaps a certain usefulness. It has its downside too, though. I see that Grand has funding as a NESTA ‘Dreamtime’ Fellow. It’s nice to be called a dreamer in some ways, but it’s an ambivalent kind of compliment…