Scott’s Aliens return

Scott Bakker’s alien consciousnesses are back, and this time it’s peer-reviewed.  We talked about their earlier appearance in the Three Pound Brain a while ago, and now a paper in the JCS sets out a new version.

The new paper foregrounds the idea of using hypothetical aliens as a forensic tool for going after the truth about our own minds; perhaps we might call it xenophenomenology. That opens up a large speculative space, though it’s one which is largely closed down again here by the accompanying assumption that our aliens are humanoid, the product of convergent evolution. In fact, they are now called Convergians, instead of the Thespians of the earlier version.

In a way, this is a shame. On the one hand, one can argue that to do xenophenomenology properly is impractical; it involves consideration of every conceivable form of intelligence, which in turn requires an heroic if not god-like imaginative power which few can aspire to (and which would leave the rest of us struggling to comprehend the titanic ontologies involved anyway). But if we could show that any possible mind would have to be x, we should have a pretty strong case for xism about human beings. In the present case not much is said about the detailed nature of the Convergian convergence, and we’re pretty much left to assume that they are the same as us in every important respect. This means there can be no final reveal in which – aha! – it turns out that all this is true of humans too! Instead it’s pretty clear that we’re effectively talking about humans all along.

Of course, there’s not much doubt about the conclusion we’re heading to here, either: in effect the Blind Brain Theory (BBT). Scott argues that as products of evolution our minds are designed to deliver survival in the most efficient way possible. As a result they make do with a mere trickle of data and apply cunning heuristics that provide a model of the world which is quick and practical but misleading in certain important respects. In particular, our minds are unsuited to metacognition – thinking about thinking – and when we do apply our minds to themselves the darkness of those old heuristics breeds monsters: our sense of our selves as real, conscious agents and the hard problems of consciousness.

This seems to put Scott in a particular bind so far as xenophenomenology is concerned. The xenophenomenological strategy requires us to consider objectively what alien minds might be like; but Scott’s theory tells us we are radically incapable of doing so. If we are presented with any intelligent being, on his view those same old heuristics will kick in and tell us that the aliens are people who think much like us. This means his conclusion that Convergians would surely suffer the same mental limitations as us appears as merely another product of faulty heuristics, and the assumed truth of his conclusion undercuts the value of his evidence.

Are those heuristics really that dominant? It is undoubtedly true that through evolution the brains of mammals and other creatures took some short cuts, and quite a few survive into human cognition, including some we’re not generally aware of. That seems to short-change the human mind a bit though; in a way the whole point of it is that it isn’t the prisoner of instinct and habit. When evolution came up with the human brain, it took a sort of gamble; instead of equipping it with good fixed routines, it set it free to come up with new ones, and even to over-ride old instincts. That gamble paid off, of course, and it leaves us uniquely able to identify and overcome our own limitations.

If it were true that our view of human conscious identity were built in by the quirks of our heuristics, surely that view would be universal, but it doesn’t seem to be. Scott suggests that, for example, the two realms of sky and earth naturally give rise to a sort of dualism, and the lack of visible detail in the distant heavens predisposes Convergians (or us) to see them as pure and spiritual. I don’t know about that as a generalisation across human cultures (didn’t the Greeks, for one thing, have three main realms, with the sea as the third?). More to the point, it’s not clear to me that modern Western ways of framing the problems of the human mind are universal. Ancient Egyptians divided personhood into several souls, not just one. I’ve been told that in Hindu thought the question of dualism simply never arises. In Shinto the line between the living and the material is not drawn in quite the Western way. In Buddhism human consciousness and personhood have been taken to be illusions for many centuries. Even in the West, I don’t think the concept of consciousness as we now debate it goes back very far at all – probably no earlier than the nineteenth century, with a real boost in the mid-twentieth (in Italian and French I believe one word has to do duty for both ‘consciousness’ and ‘conscience’, although we mustn’t read too much into that). If our heuristics condemn us to seeing our own conscious existence in a particular way, I wouldn’t have expected that much variation.

Of course there’s a difference between what vividly seems true and what careful science tells us is true; indeed if the latter didn’t reveal the limitations of our original ideas this whole discussion would be impossible. I don’t think Scott would disagree about that; and his claim that our cognitive limitations have influenced the way we understand things is entirely plausible. The question is whether that’s all there is to the problems of consciousness.

As Scott mentions here, we don’t just suffer misleading perceptions when thinking of ourselves; we also have dodgy and approximate impressions of physics. But those misperceptions were not Hard problems; no-one had ever really doubted that heavier things fell faster, for example. Galileo sorted several of these basic misperceptions out simply by being a better observer than anyone previously, and paying more careful attention. We’ve been paying careful attention to consciousness for some time now, and arguably it just gets worse.

In fairness that might rather short-change Scott’s detailed hypothesising about how the appearance of deep mystery might arise for Convergians; those, I think, are the places where xenophenomenology comes close to fulfilling its potential.

 

Crimbots

Some serious moral dialogue about robots recently. Eric Schwitzgebel put forward the idea that we might have special duties in respect of robots, on the model of the duties a parent owes to children, an idea embodied in a story he wrote with Scott Bakker. He followed up with two arguments for robot rights: first, the claim that there is no relevant difference between humans and AIs; second, a Bostromic argument that we could all be sims, and if we are, then again, we’re not different from AIs.

Scott has followed up with a characteristically subtle and bleak case for the idea that we’ll be unable to cope with the whole issue anyway. Our cognitive capacities, designed for shallow information environments, are not even up to understanding ourselves properly; the advent of a whole host of new styles of cognition will radically overwhelm them. It might well be that the revelation of how threadbare our own cognition really is will be a kind of poison pill for philosophy (a well-deserved one on this account, I suppose).

I think it’s a slight mistake to suppose that morality confers a special grade of duty in respect of children. It’s more that parents want to favour their children, and our moral codes are constructed to accommodate that. It’s true society allocates responsibility for children to their parents, but that’s essentially a pragmatic matter rather than a directly moral one. In wartime Britain the state was happy to make random strangers responsible for evacuees, while those who put the interests of society above their own offspring, like Brutus (the original one, not the Caesar stabber), have sometimes been celebrated for it.

What I want to do, though, is take up the challenge of showing why robots are indeed relevantly different to human beings, and not moral agents. I’m addressing only one kind of robot, the kind whose mind is provided by the running of a program on a digital computer (I know, John Searle would be turning in his grave if he weren’t still alive, but bear with me). I will offer two related points, and the first is that such robots suffer grave problems over identity. They don’t really have personal identity, and without that they can’t be moral agents.

Suppose Crimbot 1 has done a bad thing; we power him down, download his current state, wipe the memory in his original head, and upload him into a fresh robot body of identical design.

“Oops, I confess!” he says. Do we hold him responsible; do we punish him? Surely the transfer to a new body makes no difference? It must be the program state that carries the responsibility; we surely wouldn’t punish the body that committed the crime. It’s now running the Saintbot program, which never did anything wrong.

But then neither did the copy of Crimbot 1 software which is now running in a different body – because it’s a copy, not the original. We could upload as many copies of that as we wanted; would they all deserve punishment for something only one robot actually did?
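Just to make the bookkeeping vivid, here is a toy sketch in Python; the Crimbot class, its state dictionary and the whole download/wipe/upload routine are my own invention for illustration, not anything from Scott’s or Eric’s discussion. The point it makes is simply that once the state has been downloaded, nothing in the data distinguishes the ‘original’ offender from any number of copies, while the body that actually did the deed ends up carrying an innocent state.

```python
import copy


class Crimbot:
    """A toy stand-in for a robot whose mind is just a program state."""

    def __init__(self, state=None):
        # The 'mind' is nothing more than this dictionary of values.
        self.state = state if state is not None else {"memories": [], "disposition": "saintly"}

    def commit_crime(self):
        self.state["memories"].append("did the bad thing")
        self.state["disposition"] = "criminal"

    def download_state(self):
        # Copy out every value; the body keeps nothing back.
        return copy.deepcopy(self.state)

    def wipe(self):
        # The original body now runs the default 'Saintbot' state.
        self.state = {"memories": [], "disposition": "saintly"}


original_body = Crimbot()
original_body.commit_crime()

snapshot = original_body.download_state()        # download his current state
original_body.wipe()                             # wipe the memory in his original head

new_body = Crimbot(snapshot)                     # upload into a fresh body
another_copy = Crimbot(copy.deepcopy(snapshot))  # ...and into as many more bodies as we like

# The copies are indistinguishable from the 'guilty' state, and from each other...
print(new_body.state == another_copy.state)      # True
# ...while the body that actually committed the crime now carries an innocent state.
print(original_body.state == new_body.state)     # False
```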

Maybe we would fall back on the idea that for moral responsibility it has to be the same copy in the same body? By downloading and wiping we destroyed the person who was guilty and merely created an innocent copy? Crimbot 1 in the new body smirks at that idea.

Suppose we had uploaded the copy back into the same body? Crimbot 1 is now identical, program and body, the same as if we had merely switched him off for a minute. Does the brief interval when his data registers had different values make such a moral difference? What if he downloaded himself to an internal store, so that those values were always kept within the original body? What if he does that routinely every three seconds? Does that mean he is no longer responsible for anything (unless we catch him really quickly), while a version that doesn’t do the regular transfer of values can be punished?

We could have Crimbot 2 and Crimbot 3; 2 downloads himself to internal data storage every second and then immediately uploads himself again, while 3 merely pauses every second for the length of time that operation takes. Their behaviour is identical, the reasons for it are identical; how can we say that 2 is innocent while 3 is guilty?

But then, as the second point, surely none of them is guilty of anything? Whatever may be true of human beings, we know for sure that Crimbot 1 had no choice over what to do; his behaviour was absolutely determined by the program. If we copy him into another body and set him up with the same circumstances, he’ll do the same things. We might as well punish him in advance; all copies of the Crimbot program deserve punishment, because the only thing preventing them from committing the crime is circumstance.
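The determinism point can be put just as mechanically: same program, same state, same circumstances, same act, every time. Another toy sketch, where the decide function and its inputs are again pure invention on my part:

```python
def decide(state, circumstances):
    """A purely deterministic policy: same state plus same circumstances always yields the same act."""
    if state["disposition"] == "criminal" and circumstances == "unguarded jewellery counter":
        return "steal the jewels"
    return "walk on by"


guilty_state = {"disposition": "criminal"}
copied_state = dict(guilty_state)  # a fresh copy running in a different 'body'

# Same program, same state, same circumstances: the outcome cannot differ.
print(decide(guilty_state, "unguarded jewellery counter"))  # steal the jewels
print(decide(copied_state, "unguarded jewellery counter"))  # steal the jewels
```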

Now, we might accept all that and suggest that the same problems apply to human beings. If you downloaded and uploaded us, you could create the same issues; if we knew enough about ourselves our behaviour would be fully predictable too!

The difference is that in Crimbot the distinction between program and body is clear because he is an artefact, and he has been designed to work in certain ways. We were not designed, and we do not come in the form of a neat layer of software which can be peeled off the hardware. The human brain is unbelievably detailed, and no part of it is irrelevant. The position of a single molecule in a neuron, or even in the supporting astrocytes, may make the difference between firing and not firing, and one neuron firing can be decisive in our behaviour. Whereas Crimbot’s behaviour comes from a limited set of carefully designed functional properties, ours comes from the minute specifics of who we are. Crimbot embodies an abstraction; he’s actually designed to conform as closely as possible to design and program specs, while we’re unresolvably particular and specific.

Couldn’t that, or something like that, be the relevant difference?