Archive for August, 2013

Tom Clark has an interesting paper on Experience and Autonomy: Why Consciousness Does and Doesn’t Matter, due to appear as a chapter in Exploring the Illusion of Free Will and Responsibility (if your heart sinks at the idea of discussing free will one more time, don’t despair: this is not the same old stuff).

In essence Clark wants to propose a naturalised conception of free will and responsibility, and he seeks to dispel three particular worries about the role of consciousness: that it might be an epiphenomenon, a passenger along for the ride with no real control; that conscious processes are not in charge, but are subject to manipulation and direction by unconscious ones; and that our conception of ourselves as folk-dualist agents, able to step outside the processes of physical causation but still able to intervene in them effectively, is threatened. He makes it clear that he is championing phenomenal consciousness, that is, the consciousness which provides real if private experiences in our minds; not the sort of cognitive rational processing that an unfeeling zombie would do equally well. I think he succeeds in being clear about this, though it’s a bit of a challenge because phenomenal consciousness is typically discussed in the context of perception, while rational decision-making tends to be seen in the context of the ‘easy problem’ – zombies can make the same decisions as us and even give the same rationales. When we talk about phenomenal consciousness being relevant to our decisions, I take it we mean something like our being able to sincerely claim that we ‘thought about’ a given decision in the sense that we had actual experience of relevant thoughts passing through our minds. A zombie twin would make identical claims, but the claims would, unknown to the zombie, be false – a rather disturbing idea.

I won’t consider all of Clark’s arguments (which I am generally in sympathy with), but there are a few nice ones which I found thought-provoking. On epiphenomenalism, Clark has a neat manoeuvre. A commonly used example of an epiphenomenon, first proposed by Huxley, is the whistle on a steam locomotive; the boiler, the pistons, and the wheels all play a part in the causal story which culminates in the engine moving down the track; the whistle is there too, but not part of that story. Now discussion has sometimes been handicapped by the existence of two different conceptions of epiphenomenalism; a rigorous one in which there really must be no causal effects at all, and a looser one in which there may be some causal effects but only ones that are irrelevant, subliminal, or otherwise ignorable. I tend towards the rigorous conception myself, and have consequently argued in the past that the whistle on a steam engine is not really a good example. Blowing the whistle lets steam out of the boiler which does have real effects. Typically they may be small, but in principle a long enough blast can stop a train altogether.

But Clark reverses that unexpectedly. He argues that in order to be considered an epiphenomenon an entity has to be the sort of thing that might have had a causal role in the process. So the whistle is a good example; but because consciousness is outside the third-person account of things altogether, it isn’t even a candidate to be an epiphenomenon! Although that inverts my own outlook, I think it’s a pretty neat piece of footwork. If I wanted a come-back I think I would let Clark have his version of epiphenomenalism and define a new kind, x-epiphenomenalism, which doesn’t require an entity to be the kind of thing that could have a causal role; I’d then argue that consciousness being x-epiphenomenal is just as worrying as the old problem. No doubt Clark in turn might come back and argue that all kinds of unworrying things were going to turn out to be x-epiphenomenal on that basis, and so on; however, since I don’t have any great desire to defend epiphenomenalism I won’t even start down that road.

On the second worry Clark gives a sensible response to the issues raised by the research of Libet and others, which suggests our decisions are determined internally before they ever enter our consciousness; but I was especially struck by his arguments on the potential influence of unconscious factors, which form an important part of his wider case. There is a vast weight of scientific evidence to show that often enough our choices are influenced or even determined by unconscious factors we’re not aware of; Clark gives a few examples but there are many more. Perhaps consciousness is not the chief executive of our minds after all, just the PR department?

Clark nibbles the bullet a bit here, accepting that unconscious influence does happen, but arguing that when we are aware of, say, ethnic bias or other such factors, we can consciously fight against them and second-guess our unworthier unconscious impulses. I like the idea that it’s when we battle our own primitive inclinations that we become most truly ourselves; but the issues get pretty complicated.

As a side issue, Clark’s examples all suppose that more or less wicked unconscious biases are to be defeated by a more ethical conscious conception of ourselves (rather reminiscent of those cartoon disputes between an angel on the character’s right shoulder and a devil on the left); but it ain’t necessarily so. What if my conscious mind rules out, on principled but sectarian grounds, a marriage to someone I sincerely love with my unconscious inclinations? I’m not clear that the sectarian is to be considered the representative of virtue (or of my essential personal agency) any more than the lover.

That’s not the point at all, of course: Clark is not arguing that consciousness is always right, only that it has a genuine role. However, the position is never going to be clear. Suppose I am inclined to vote against candidate N, who has a big nose. I tell myself I should vote for him because it’s the schnozz that is putting me off. Oh no, I tell myself, it’s his policies I don’t like, not his nose at all. Ah, but you would think that, I tell myself, you’re bound to be unaware of the bias, so you need to aim off a bit. How much do I aim off, though – am I to vote for all big-nosed candidates regardless? Surely I might also have legitimate grounds for disliking them? And does that ‘aiming off’ really give my consciousness a proper role or merely defer to some external set of rules?

Worse yet, as I leave the polling station it suddenly occurs to me that the truth is, the nose had nothing to do with it; I really voted for N because I’m biased in favour of white middle-aged males; my unconscious fabricated the stuff about the nose to give me a plausible cover story while achieving its own ends. Or did it? Because the influences I’m fighting are unconscious, how will I ever know what they really are, and if I don’t know, doesn’t the claimed role of consciousness become merely a matter of faith? It could always turn out that if I really knew what was going on, I’d see my consciousness was having its strings pulled all the time. Consciousness can present a rationale which it claims was effective, but it could do that anyway; it can never know whether the rationale is really a mask for unconscious machinations.

The last of the three worries tackled by Clark is not strictly a philosophical or scientific one; we might well say that if people’s folk-dualist ideas are threatened, so much the worse for them. There is, however, some evidence that undiluted materialism does induce what Clark calls a “puppet” outlook in which people’s sense of moral responsibility is weakened and their behaviour worsened. Clark provides rational answers but his views tend to put him in the position of conceding that something has indeed been lost. Consciousness does and doesn’t matter. I don’t think anything worth having can be lost by getting closer to the truth and I don’t think a properly materialist outlook is necessarily morally corrosive – even in a small degree. I think what we’re really lacking for the moment is a sufficiently inspiring, cogent, and understood naturalised ethics to go with our naturalised view of the mind. There’s much to be done on that, but it’s far from hopeless (as I expect Clark might agree).

There’s much more in the paper than I have touched on here; I recommend a look at it.

Massimo Pigliucci issued a spirited counterblast to computationalism recently, which I picked up on MLU. He says that people too often read the Turing-Church hypothesis as if it said that a Universal Turing Machine could do anything that any machine could do. They then take that as a basis on which to help themselves to computationalism. He quotes Jack Copeland as saying that a myth has arisen on the matter, and citing examples where he feels that Dennett and the Churchlands have mis-stated the position. Actually, says Pigliucci, Turing merely tells us that a Universal Turing Machine can do anything a specific Turing machine can do, and that does not tell us what real-world machines can or can’t do.

It’s possible some nits are being picked here. Copeland’s reported view seems a trifle too puritanical in its refusal to look at wider implications; I think Turing himself would have been surprised to hear that his work told us nothing about the potential capacities of real-world digital computers. But of course Pigliucci is quite right that it doesn’t establish that the brain is computational. Indeed, Turing’s main point was arguably about the limits of computation, showing that there are problems that cannot be handled computationally. It’s sort of part of our bedrock understanding of computation that there are many non-computable problems; apart from the original halting problem, the tiling problem may be the most familiar. Tiling problems are associated with the ingenious work of Roger Penrose, and he, of course, published many years ago now what he claims is a proof that when mathematicians are thinking original mathematical thoughts they are not computing.
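The halting problem itself can be compressed into a few lines. The following is my own illustration, not anything from Pigliucci’s post: take any total function that claims to decide halting, and build a program that does the opposite of whatever the decider predicts about it; the decider is then guaranteed to be wrong about that program.

```python
# A compact rendition of the diagonal argument behind the halting problem
# (my sketch; the names here are illustrative, not from any real library).
# Pick ANY total, always-answering function that claims to decide halting;
# the program g below is built to contradict its prediction about itself.

def claims_to_halt(program, arg):
    """Stand-in 'halting decider' - this particular guesser says everything halts."""
    return True

def g(x):
    """Does the opposite of whatever the decider predicts about g(x)."""
    if claims_to_halt(g, x):
        while True:   # predicted to halt, so loop forever
            pass
    return "halts"    # predicted to loop, so halt immediately

prediction = claims_to_halt(g, g)   # the decider says: g(g) halts
actually_halts = not prediction     # but g(g) does the opposite by construction
print(prediction, actually_halts)   # True False - the decider is wrong about g
```

The same contradiction arises whatever body we give `claims_to_halt`, which is the point: no total decider can be right about the program built to contradict it.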

So really Pigliucci’s moderate conclusion that computationalism remains an open issue ought to be uncontroversial? Surely no-one really supposes that the debate is over? Strangely enough there does seem to have been a bit of a revival in hard-line computationalism. Pigliucci goes on to look at pancomputationalism, the view that every natural process instantiates a computation (or even all possible computations). This is rather like the view John Searle once proposed, that a window can be seen as a computer because it has two states, open and closed, which are enough to express a stream of binary digits. I don’t propose to go into that in any detail, except to say I think I broadly agree with Pigliucci that it requires an excessively liberal use of interpretation. In particular, I think in order to interpret everything as a computation, we generally have to allow ourselves to interpret the same physical state of the object as different computational states at different times, and that’s not really legitimate. If I can do that I can interpret myself into being a wizard, because I’m free to interpret my physical self as human at one time, a dragon at another, and a fluffy pink bunny at a third.
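The liberal-interpretation move can be made embarrassingly explicit. Here is a small sketch of my own (the function and variable names are mine, purely illustrative): if we are allowed a fresh mapping from physical state to computational state at every time step, then any physical process can be read as performing any computation whatsoever.

```python
# Sketch of the "liberal interpretation" move objected to above: if the
# SAME physical state may be mapped to DIFFERENT computational states at
# different times, any physical trace "implements" any computation.

def interpret_as(physical_trace, target_trace):
    """Build one state-to-bit mapping PER TIME STEP that makes the
    physical process 'compute' the target - trivially always possible."""
    assert len(physical_trace) == len(target_trace)
    return [{phys: comp} for phys, comp in zip(physical_trace, target_trace)]

# A window that simply stays open the whole time...
window = ["open", "open", "open", "open"]
# ...can be read off as computing the bit sequence 1, 0, 1, 1:
mappings = interpret_as(window, [1, 0, 1, 1])
decoded = [m[state] for m, state in zip(mappings, window)]
print(decoded)  # [1, 0, 1, 1]
```

Note that the state "open" gets read as 1 at the first step and 0 at the second – exactly the shifting interpretation that the argument above says should be disallowed. With a single fixed mapping, the unmoving window could only ever compute a constant.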

But without being pancomputationalists we might wonder why the limits of computation don’t hit us in the face more often. The world is full of non-computable problems, but they rarely seem to give us much difficulty. Why is that? One answer might be in the amusing argument put by Ray Kurzweil in his book How to Create a Mind. Kurzweil espouses a doctrine called the "Universality of Computation", which he glosses as "the concept that a general-purpose computer can implement any algorithm". I wonder whether that would attract a look of magisterial disapproval from Jack Copeland? Anyway, Kurzweil describes a non-computable problem known as the ‘busy beaver’ problem. The task here is to work out, for a given value of n, the maximum number of ones written by any halting Turing machine with n states. The problem is uncomputable in general because as the computer (a Universal Turing Machine) works through the simulation of all the machines with n states, it runs into some that get stuck in a loop and never halt.
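For small n, though, particular cases yield to direct simulation. Below is a minimal Turing-machine simulator of my own devising, run on the standard 2-state, 2-symbol busy beaver champion machine (the machine’s transition table is well known; the code around it is just an illustrative sketch). It writes four ones in six steps, which is the proven maximum for n = 2.

```python
# Minimal Turing-machine simulator run on the known 2-state, 2-symbol
# "busy beaver" champion. The tape is a dict from position to symbol
# (default 0); 'H' is the halting state.

def run(machine, max_steps=10_000):
    """Simulate a machine given as {(state, symbol): (write, move, next_state)}.
    Returns (number of ones on the tape, steps taken)."""
    tape, head, state, steps = {}, 0, "A", 0
    while state != "H" and steps < max_steps:
        symbol = tape.get(head, 0)
        write, move, state = machine[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return sum(tape.values()), steps

# The 2-state busy beaver champion: writes the maximum possible
# number of ones (four) for any halting 2-state machine.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
ones, steps = run(bb2)
print(ones, steps)  # 4 6
```

What makes the general problem uncomputable is visible in the `max_steps` guard: enumerate all n-state machines and some never halt, and no simulator can tell, in general, whether a still-running machine is looping forever or just slow.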

So, says Kurzweil, an example of the terrible weakness of computers when set against the human mind? Yet for many values of n it happens that the problem is solvable, and as a matter of fact computers have solved many such particular cases – many more than have actually been solved by unaided human thought! I think Turing would have liked that; it resembles points he made in his famous 1950 essay on Computing Machinery and Intelligence.

Standing aside from the fray a little, the thing that really strikes me is that the argument seems such a blast from the past. This kind of thing was chewed over with great energy twenty or even thirty years ago, and in some respects it doesn’t seem as important as it used to. I doubt whether consciousness is purely computational, but it may well be subserved, or be capable of being subserved, by computational processes in important ways. When we finally get an artificial consciousness, it wouldn’t surprise me if the heavy lifting is done by computational modules which either relate in a non-computational way or rely on non-computational processing, perhaps in pattern recognition, though Kurzweil would surely hate the idea that that key process might not be computed. I doubt whether the proud inventor on that happy day will be very concerned with the question of whether his machine is computational or not.