Conversation with a Zombie

Tom has written a nice dialogue on the subject of qualia: it’s here.

Could we in fact learn useful lessons from talking to a robot which lacked qualia?

Perhaps not; one view would be that since the robot’s mind presumably works in the same way as ours, it would have similar qualia: or would think it did. We know that David Chalmers’ zombie twin talked and philosophised about its qualia in exactly the same way as the original.

It depends on what you mean by qualia, of course. Some people conceive of qualia as psychological items that add extra significance or force to experience; or as flags that draw attention to something of potential interest. Those play a distinct role in decision making and have an influence on behaviour. If robots were really to behave like us, they would have to have some functional analogue of that kind of qualia, and so we might indeed find that talking to them on the subject was really no better or worse than talking to our fellow human beings.

But those are not real qualia, because they are fully naturalised and effable things, measurable parts of the physical world. Whether you are experiencing the same blue quale as me would, if these flags or intensifiers were qualia, be an entirely measurable and objective question, capable of a clear answer. Real, philosophically interesting qualia are far more slippery than that.

So we might expect that a robot would reproduce the functional, a-consciousness parts of our mind, and leave the phenomenal, p-consciousness ones out. Like Tom’s robot, they would presumably be puzzled by references to subjective experience. Perhaps, then, there might be no point in talking to them about it, because they would be constitutionally incapable of shedding any light on it. They could tell us what the zombie life is like, but don’t we sort of know that already? They could play the kind of part in a dialogue that Socrates’ easily-bamboozled interlocutors always seemed to, but that’s about it, presumably?

Or perhaps they would be able to show us, by providing a contrasting example, how and why it is that we come to have these qualia? There’s something distinctly odd about the way qualia are apparently untethered from physical cause and effect, yet only appear in human beings with their complex brains. Or could it be that they’re everywhere, and it’s not that only we have them, it’s more that we’re the only entities that talk about them (or about anything)?

Perhaps talking to a robot would convince us in the end that in fact, we don’t have qualia either: that they are just a confused delusion. One scarier possibility though, is that robots would understand them all too well.

“Oh,” they might say, “Yes, of course we have those. But scanning through the literature it seems to us you humans only have a very limited appreciation of the qualic field. You experience simple local point qualia, but you have no perception of higher-order qualia; the qualia of the surface or the solid, or the complex manifold that seems so evident to us. Gosh, it must be awful…”

Experience and Autonomy

Tom Clark has an interesting paper on Experience and Autonomy: Why Consciousness Does and Doesn’t Matter, due to appear as a chapter in Exploring the Illusion of Free Will and Responsibility (if your heart sinks at the idea of discussing free will one more time, don’t despair: this is not the same old stuff).

In essence Clark wants to propose a naturalised conception of free will and responsibility, and he seeks to dispel three particular worries about the role of consciousness: that it might be an epiphenomenon, a passenger along for the ride with no real control; that conscious processes are not in charge, but are subject to manipulation and direction by unconscious ones; and that our conception of ourselves as folk-dualist agents, able to step outside the processes of physical causation but still able to intervene in them effectively, is threatened. He makes it clear that he is championing phenomenal consciousness, that is, the consciousness which provides real if private experiences in our minds; not the sort of cognitive rational processing that an unfeeling zombie would do equally well. I think he succeeds in being clear about this, though it’s a bit of a challenge because phenomenal consciousness is typically discussed in the context of perception, while rational decision-making tends to be seen in the context of the ‘easy problem’ – zombies can make the same decisions as us and even give the same rationales. When we talk about phenomenal consciousness being relevant to our decisions, I take it we mean something like our being able to sincerely claim that we ‘thought about’ a given decision, in the sense that we had actual experience of relevant thoughts passing through our minds. A zombie twin would make identical claims, but the claims would, unknown to the zombie, be false – a rather disturbing idea.

I won’t consider all of Clark’s arguments (which I am generally in sympathy with), but there are a few nice ones which I found thought-provoking. On epiphenomenalism, Clark has a neat manoeuvre. A commonly used example of an epiphenomenon, first proposed by Huxley, is the whistle on a steam locomotive; the boiler, the pistons, and the wheels all play a part in the causal story which culminates in the engine moving down the track; the whistle is there too, but not part of that story. Now discussion has sometimes been handicapped by the existence of two different conceptions of epiphenomenalism; a rigorous one in which there really must be no causal effects at all, and a looser one in which there may be some causal effects but only ones that are irrelevant, subliminal, or otherwise ignorable. I tend towards the rigorous conception myself, and have consequently argued in the past that the whistle on a steam engine is not really a good example. Blowing the whistle lets steam out of the boiler which does have real effects. Typically they may be small, but in principle a long enough blast can stop a train altogether.

But Clark reverses that unexpectedly. He argues that in order to be considered an epiphenomenon an entity has to be the sort of thing that might have had a causal role in the process. So the whistle is a good example; but because consciousness is outside the third-person account of things altogether, it isn’t even a candidate to be an epiphenomenon! Although that inverts my own outlook, I think it’s a pretty neat piece of footwork. If I wanted a come-back I think I would let Clark have his version of epiphenomenalism and define a new kind, x-epiphenomenalism, which doesn’t require an entity to be the kind of thing that could have a causal role; I’d then argue that consciousness being x-epiphenomenal is just as worrying as the old problem. No doubt Clark in turn might come back and argue that all kinds of unworrying things were going to turn out to be x-epiphenomenal on that basis, and so on; however, since I don’t have any great desire to defend epiphenomenalism I won’t even start down that road.

On the second worry Clark gives a sensible response to the issues raised by the research of Libet and others which suggest our decisions are determined internally before they ever enter our consciousness; but I was especially struck by his arguments on the potential influence of unconscious factors which form an important part of his wider case. There is a vast weight of scientific evidence to show that often enough our choices are influenced or even determined by unconscious factors we’re not aware of; Clark gives a few examples but there are many more. Perhaps consciousness is not the chief executive of our minds after all, just the PR department?

Clark nibbles the bullet a bit here, accepting that unconscious influence does happen, but arguing that when we are aware of say, ethnic bias or other factors, we can consciously fight against it and second-guess our unworthier unconscious impulses. I like the idea that it’s when we battle our own primitive inclinations that we become most truly ourselves; but the issues get pretty complicated.

As a side issue, Clark’s examples all suppose that more or less wicked unconscious biases are to be defeated by a more ethical conscious conception of ourselves (rather reminiscent of those cartoon disputes between an angel on the character’s right shoulder and a devil on the left); but it ain’t necessarily so. What if my conscious mind rules out, on principled but sectarian grounds, a marriage to someone I sincerely love with my unconscious inclinations? I’m not clear that the sectarian is to be considered the representative of virtue (or of my essential personal agency) any more than the lover.

That’s not the point at all, of course: Clark is not arguing that consciousness is always right, only that it has a genuine role. However, the position is never going to be clear. Suppose I am inclined to vote against candidate N, who has a big nose. I tell myself I should vote for him because it’s the schnozz that is putting me off. Oh no, I tell myself, it’s his policies I don’t like, not his nose at all. Ah, but you would think that, I tell myself; you’re bound to be unaware of the bias, so you need to aim off a bit. How much do I aim off, though – am I to vote for all big-nosed candidates regardless? Surely I might also have legitimate grounds for disliking them? And does that ‘aiming off’ really give my consciousness a proper role, or merely defer to some external set of rules?

Worse yet, as I leave the polling station it suddenly occurs to me that the truth is, the nose had nothing to do with it; I really voted for N because I’m biased in favour of white middle-aged males; my unconscious fabricated the stuff about the nose to give me a plausible cover story while achieving its own ends. Or did it? Because the influences I’m fighting are unconscious, how will I ever know what they really are? And if I don’t know, doesn’t the claimed role of consciousness become merely a matter of faith? It could always turn out that if I really knew what was going on, I’d see my consciousness was having its strings pulled all the time. Consciousness can present a rationale which it claims was effective, but it could do that all along without ever knowing whether the rationale was really a mask for unconscious machinations.

The last of the three worries tackled by Clark is not strictly a philosophical or scientific one; we might well say that if people’s folk-dualist ideas are threatened, so much the worse for them. There is, however, some evidence that undiluted materialism does induce what Clark calls a “puppet” outlook, in which people’s sense of moral responsibility is weakened and their behaviour worsened. Clark provides rational answers, but his views tend to put him in the position of conceding that something has indeed been lost. Consciousness does and doesn’t matter. I don’t think anything worth having can be lost by getting closer to the truth, and I don’t think a properly materialist outlook is necessarily morally corrosive – even to a small degree. I think what we’re really lacking for the moment is a sufficiently inspiring, cogent, and well-understood naturalised ethics to go with our naturalised view of the mind. There’s much to be done on that, but it’s far from hopeless (as I expect Clark might agree).

There’s much more in the paper than I have touched on here; I recommend a look at it.

Interesting Stuff

Tom Clark is developing a representationalist approach to the hard problem and mental causation: see The appearance of reality and Respecting privacy: why consciousness isn’t even epiphenomenal. He borrows from Metzinger but diverges in some important respects, especially in denying the causal role of consciousness in third-person explanations of behaviour. Tom says he’d welcome feedback.

Roger Penrose, delivering the second Rabindranath Tagore lecture in Kolkata, was surprisingly upbeat about prospects for AI, though sticking to his view that consciousness is not computational and requires some exotic quantum physics. Alas, I can’t find a transcript.

At Google, Dmitriy Genzel is attempting machine translation of poetry. Considering that the translation of poetry is demanding or even impossible for skilful human authors, you could say this was ambitious. His paper (pdf) gives some examples of what has been achieved; there’s also a review in verse.

Finally, just a mention for the claim made briefly by Masao Ito that the cerebellum (normally regarded as the part of the brain that does the automatic stuff) may have an important role in high-level cognition. That would be very interesting, but don’t people sometimes have the cerebellum entirely removed? I understood that this makes life difficult for them in various ways, but doesn’t seem to affect high-level processes.