Four zomboids

I’ve suggested previously that one of the important features of qualia (the redness of red, the smelliness of Gorgonzola, etc) is haecceity, thisness. When we experience redness, it’s not any Platonic kind of redness, it’s not an idea of redness, it’s that. When people say there is something it is like to smell that smell, we might reply, yes: that. That’s what it’s like. The difficulty of even talking about qualia is notorious, I’m afraid.

But it occurred to me that there was another problem, not so often addressed, which has the same characteristic: the problem of one’s own haecceity.

Both problems are ones that often occur to people of a philosophical turn of mind, even if they have no academic knowledge of the subject. People sometimes remark ‘For all we know, when I see the colour blue, you might be seeing what I see when I see red’, and similarly, thoughtful people are sometimes struck by puzzlement as to why they are themselves and not someone else. That particular problem has a trivial interpretation – if you weren’t you, it would be someone else’s problem – but there is a real and difficult issue related to the grand ultimate question of why there is anything at all, and why specifically this.

One of the standard arguments for qualia, of course, is the possibility of philosophical zombies, people who resemble us in every way except that they have no qualia. They talk and behave just like us, but there is nothing happening inside, no phenomenal experience. Qualophiles contend that the possibility of such zombies establishes qualia as real and additional to the normal physical story. Can we have personhood zombies, too? These would be people who, again, are to all appearances like us, but they don’t have any experience of being anyone, no sense of being a particular self. It seems at least a prima facie possibility.

That means that if we consider both qualia and selfhood, we have a range of four possible zomboid variants. Number one, not in fact a zombie at all, would have both qualia and the experience of selfhood – probably what the majority would consider the normal state of affairs. His opposite would have neither qualia nor a special sense of self, which is what a Dennettian sceptic takes to be the normal position. Number three has a phenomenal awareness of his own existence, but no qualia; this is what I would take to be the standard philosophical zombie. That reading is not clear-cut, of course: I assume that the absence of any discussion of the self in typical treatments of qualia implies that zombies are normal in this respect, but others might not agree, and some might even be inclined to regard the sense of self as just a specific example of a quale (there are, presumably, proprioceptive qualia, though I don’t think that’s what I’m talking about here), not really worthy of independent discussion.
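Laid out explicitly, the four variants are just the two-by-two combinations of having qualia and having a sense of self. A toy enumeration – purely illustrative, with labels that are only my shorthand for the positions sketched above – might run:

```python
from itertools import product

# The two dimensions under discussion: phenomenal qualia and a sense of self.
labels = {
    (True,  True):  "1: qualia and self (most people's 'normal')",
    (False, False): "2: neither (the Dennettian sceptic's 'normal')",
    (False, True):  "3: self but no qualia (the standard philosophical zombie, on my reading)",
    (True,  False): "4: qualia but no self (the strange case discussed below)",
}

for has_qualia, has_self in product([True, False], repeat=2):
    print(f"qualia={has_qualia!s:<5}  self={has_self!s:<5}  ->  {labels[(has_qualia, has_self)]}")
```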

It’s number four that really is a bit strange; he has qualia going on inside, but no him in him: phenomenal experience, but no apparent experiencer. Is this conceivable? I certainly have no knock-down argument, but my inclination is to say it isn’t: I’m inclined to say that all experiences have an experiencer just as surely as causes have effects. If that’s true, then it suggests the two cases of haecceity might be reducible to one: the thisness of your qualia is really just your own thisness as the experiencer (I hope you’re still with me here, reader). That in turn might mean we haven’t been looking in quite the right place for a proper account of qualia.

What if number four were conceivable? If qualia can exist in the absence of any subjective self-awareness, that suggests they’re not as tightly tied to people as we might have thought. That would surely give comfort to the panpsychists, who might be happy to think of qualia blossoming everywhere, in inanimate matter as well as in brains. I don’t find this an especially congenial perspective myself, but if it were true, we’d still want to look at personal thisness and how it makes the qualia of human beings different from the qualia of stones.

At this point I ought to have a blindingly original new idea about the metaphysics of the sense of self which would illuminate the whole question as never before. I’m afraid I’ll have to come back to you on that one.

Blue Brain – success?

A bit of an update in Seed magazine on the Blue Brain project. This is the project that set out to simulate the brain by actually reproducing it in full biological detail down to the behaviour of individual neurons and beyond: with some success, it seems.

The idea of actually simulating a real brain in full has always seemed fantastically ambitious, of course, and in fact the immediate aim was more modest: to simulate one of the columnar structures in the cortex. This is still an undertaking of mind-boggling complexity: 10,000 neurons and 30 million synaptic connections, involving 30 different kinds of ion channel. In fact it seems the ion channels were one of the problem areas; in order to get good enough information about them, the project apparently had to set up its own robotic research programme. I hope the findings of this particular bit of the project are being published in a peer-reviewed journal somewhere.
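Just to get a feel for what those figures imply, here is a rough back-of-the-envelope sketch in Python; the neocortex figure is a commonly cited outside estimate, not a number from the project itself:

```python
# Connectivity implied by the column figures quoted above.
column_neurons = 10_000
column_synapses = 30_000_000

synapses_per_neuron = column_synapses / column_neurons
print(f"Average synapses per neuron in the column: {synapses_per_neuron:,.0f}")  # ~3,000

# Very rough scale-up: the human neocortex is commonly put at around 2e10 neurons
# (an assumption for illustration, not a project figure), suggesting on the order
# of a couple of million columns of this size.
neocortex_neurons = 2e10
implied_columns = neocortex_neurons / column_neurons
print(f"Implied number of columns at this size: {implied_columns:.1e}")
```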

However, the remarkable thing is that it worked: eventually the simulated column was created and proved to behave in the same way as a real one. So is the way open for a full brain simulation? Not quite. Even setting aside the many structural challenges which surely remain to be unravelled (don’t they – the brain isn’t simply an agglomeration of neocortical columns?), Henry Markram, the project Director, estimates that an entire brain would require the processing of 500 petabytes of data, way beyond current feasibility. Markram believes that within ten years, the inexorable increase in computing power will make this a serious possibility. Maybe: it doesn’t pay to bet against Moore’s Law – but I can’t help noticing that there has been a big historical inflation in the estimated need, too. Markram now wants 500 petabytes: a single petabyte is 10^15 bytes; but in 1950 Turing thought that 10^15 bits represented the highest likely capacity of the brain, with about 10^9 enough for a machine which could pass the Turing Test. OK, perhaps not really a fair comparison, since Turing had nothing like Blue Brain in mind.
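To make that comparison concrete, a quick and purely illustrative calculation, taking a petabyte as 10^15 bytes and using Turing’s own 1950 figures:

```python
# Markram's estimate set against Turing's 1950 figures (illustrative only).
PETABYTE_BYTES = 10**15

markram_bits = 500 * PETABYTE_BYTES * 8   # 500 petabytes expressed in bits: 4e18

turing_brain_bits = 10**15                # Turing's upper estimate of the brain's capacity
turing_test_bits = 10**9                  # his figure for a machine able to pass the Test

print(f"Markram's estimate: {markram_bits:.0e} bits")
print(f"Ratio to Turing's brain estimate: {markram_bits / turing_brain_bits:,.0f}x")
print(f"Ratio to Turing's Turing Test figure: {markram_bits / turing_test_bits:.0e}x")
```

Even allowing that Turing was talking about the storage needed for a program rather than a biological simulation, that is a jump of well over three orders of magnitude on his most generous estimate.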

One criticism of the project asks how it judges its own success – or rather, suggests that the fact that it does judge its own success is a problem. If we had a full brain which could operate a humanoid robot and talk to us, there would be no doubt about the success of the project; but how do we judge whether a simulated neuronal column is actually working? The project team say that their conclusions are based on scrupulous comparisons with real biological brains, and no doubt that’s right; but there’s still a danger that the simulation merely confirms the expectations fed into it. They came up with an idea of how a column works; they built something that worked like that: and behold, it works how they think a column works.

There is also undeniably something a bit strange about the project. Before Blue Brain was ever thought of, proponents of AI would sometimes use the idea of a total simulation as a kind of thought-experiment to establish the merely neurological nature of the mind. OK, there might be all these mechanisms we didn’t understand, and emergent phenomena, and all the rest, but at the end of the day, what if we just simulated everything? Surely then you’d have to admit, we would have made an artificial mind – and what was to stop us, except practicality? It was an unexpected development back in 2005 when someone actually set about making this last-ditch argument a reality. It is unique; I can’t think of another case where someone set out to reproduce a biological process by building a fully detailed simulation, without having any theory of how the thing worked in principle.

This raises some peculiar possibilities. We might put together the full Blue Brain; it might be demonstrably performing like a human brain, controlling a robot which walked around and discussed philosophy with us, yet we still wouldn’t know how it did it. Or, worse perhaps, we might put it all together, see everything working perfectly at a neuronal level, and yet have our attached robot standing slack-jawed or rolling around in a fit, without our being able to tell why.

It may seem unfair to describe Markram and his colleagues as having no theory, but some of his remarks in the article suggest he may be one of those scientists who doesn’t really get the Hard Problem at all.

…It’s the transformation of those cells into experience that’s so hard. Still, Markram insists that it’s not impossible. The first step, he says, will be to decipher the connection between the sensations entering the robotic rat and the flickering voltages of its brain cells. Once that problem is solved—and that’s just a matter of massive correlation—the supercomputer should be able to reverse the process. It should be able to take its map of the cortex and generate a movie of experience, a first person view of reality rooted in the details of the brain…

It could be that Markram merely denies the existence of qualia, a perfectly respectable point of view; but it looks as if he hasn’t really grasped what they are, and that they can’t be captured on any kind of movie. Perhaps this outlook is a natural or even a necessary quality of someone running this kind of project. I suppose we’ll have to wait and see what happens when he gets his 500 petabyte capacity.

iCub iTalk?

More about babies – of a sort. You may have seen reports that Plymouth University, with support from many other institutions, has won the opportunity to teach a baby robot to speak. The robot in question is the iCub, and the project is part of Italk, funded by the EU under its Seventh Framework Programme, to the tune of £4.7 million (you can’t help wondering whether it wouldn’t have been value for money to have slipped Steve Grand half a million while they were at it…).

The gist of it seems to be that next year the people at Plymouth will get the iCub to engage in various dull activities like block-stacking (a perennial with both babies and AI) and try to introduce speech communication about the tasks. The robot is meant to learn in a way far closer to typical human experience than anything attempted before. Unfortunately, I haven’t been able to find any clear statement of how they expect the language skills to work, though there is quite a lot of technical detail available about the iCub. This is evidently a pretty splendid piece of kit, although the current model has a mask for a face, which means none of the potent interactive procedures which depend on facial expression, as explored by Cynthia Breazeal and others, will be available. This is a shame, since real babies do use face recognition and smiles to get more feedback out of adults.

In one respect the project has an impeccable background, since Alan Turing, in the famous paper which arguably gave rise to modern Artificial Intelligence, speculated that a thinking robot might have to begin as a baby and be educated. But it seems a tremendously ambitious undertaking. If we are to believe Chomsky, the human language acquisition device is built in – it has to be, since human babies get such poor input, with few corrections and limited samples, yet learn at a fantastic speed. They just don’t get enough information about the language around them to be able to reverse-engineer its rules; so they must in fact simply be customising the settings of their language machine and filling up its vocabulary stores. The structures of real-world languages arguably support this point of view, since the variations in grammar seem to fall within certain limited options, rather than exploiting the full range of logical possibilities. If this is all true, then a robot which doesn’t have a built-in language facility is not going to get very far with talking just by playing with some toys.

Of course Chomsky is not, as it were, the only game in town: a more recent school of thought says that by treating language as a formal code, and assuming that babies have to learn the rules before they can work out what people mean, Chomsky puts the cart before the horse; actually, it’s because babies can see what people mean that they can crack the code of grammar so efficiently.

That’s a more congenial point of view for the current project, I imagine, but it raises another question. On this view, babies are not using an innate language module, but they are using an innate ability to understand, to get people’s drift. I don’t think the Plymouth team have worked out a way of building understanding in beforehand (that would be a feat well worth trumpeting in its own right), so is it their expectation that the iCub will acquire understanding through training? Or are their aims somewhat lower?

It seems a key question to me: if they want the robot to understand what it’s saying, they’re aiming high, and it would be good to know what the basis for their optimism is (and how they’re going to demonstrate the achievement). If not, if they’re merely aiming for a basic level of performance without worrying about understanding (and the selection of experiments does rather point in that direction), the project seems a bit underwhelming. Would this be any different, fundamentally, from what Terry Winograd was doing, getting on for forty years ago (albeit with SHRDLU, a simulated and less charismatic robot)?