|
Fig. 23: a basic stance (intermediate)
|
Dennett is the great demystifier of
consciousness. According to him there is,
in the final analysis,
nothing fundamentally inexplicable about
the way we attribute intentions and
conscious feelings to people. We often
attribute feelings or intentions
metaphorically to non-human things, after
all. We might say our car is a bit tired
today, or that our pot plant is thirsty.
At the end of the day, our attitude to
other human beings is just a version - a
much more sophisticated version - of the
same strategy. Attributing intentions to
human animals makes it much easier to work
out what their behaviour is likely to
be. It pays us, in short, to adopt the
intentional stance when trying to
understand human beings.
This isn't the only
example of such a stance, of course. A slightly simpler example is
the special 'design stance' we adopt towards machines when we try to
understand how they work (that is, by assuming that they do
something useful which can be guessed from their design and
construction). An axe is just a lump of wood and iron, but we
naturally ask ourselves what it could be for, and the answer
(chopping) is evident. A third stance is the basic physical one we
adopt when we try to predict how something will behave just by
regarding it as a physical object and applying the laws of physics
to it.
It's instructive to notice that
when we adopt the design stance towards an axe, we don't assume that
the axe is magically imbued with spiritual axehood: but at the same
time its axehood is uncontroversially a fact. If we only understood
things this way all the time, we should find the real nature of
people and thoughts no more worrying than the real nature of axes.
One day there could well
be machines which fully justify our adopting the intentional
stance towards them and hence treating them like human beings. With
some machines, some of the time, and up to a point, we do this
already (think of computer chess), but Dennett would not predict the
arrival of a robot with full human-style consciousness for a while
yet.
So it's all a matter of
explanatory stances. But doesn't that mean that people are not
'real', just imaginary constructions? Well, are centres of gravity
real? We know that forces really act on every part of a given
body, but it is much easier, and no less accurate, to focus our
calculations on a single average point. People are a bit like
that. There is a whole range of separate processes going on in the
relevant areas of your brain at any one time - producing a lot of
competing 'multiple drafts' of what you might think, or say. Your
actual thoughts or speech emerge from this competition between rival
versions - a kind of survival of the fittest, if you like. The
intentional stance helps us work out what the overall result will
be.
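The centre-of-gravity analogy can be made concrete with a toy calculation (my own illustration, not from Dennett): many separate forces act on many points, but a single weighted-average point serves all our predictive purposes.

```python
# Illustrative sketch: gravity acts on every part of a body, but for
# calculation we can replace all those separate forces with a single
# one acting at the weighted-average point - the centre of mass.
def centre_of_mass(points):
    """points: list of (mass, position) pairs along one axis."""
    total_mass = sum(m for m, _ in points)
    return sum(m * x for m, x in points) / total_mass

# A hypothetical body made of three point masses.
body = [(1.0, 0.0), (2.0, 3.0), (1.0, 6.0)]
print(centre_of_mass(body))  # 3.0
```

The centre of mass is not one of the body's real parts, yet it is not imaginary either: it is a perfectly objective, useful simplification - which is roughly the status Dennett claims for selves.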
|
|
The 'overall result'? But it's not as if the different
versions get averaged out, is it? I thought with the multiple drafts
idea one draft always won at the expense of all the others. That's
one of the weaknesses of the idea - if one 'agent' can do the
drafting on its own, why would you have several?
|
|
It's just
more effective to have several competing drafts on the go, and
then pick the best. It's a selective process, comparable in
some respects to evolution - or a form of parallel processing, if
you like.
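The selective process described here can be sketched in a few lines of code (a toy model of my own, not Dennett's): several candidate 'drafts' are generated independently, and whichever scores best under some fitness measure wins control.

```python
import random

# Hypothetical toy model of 'multiple drafts': generate several
# candidates in parallel, then let a selective competition pick
# the winner - no central author, just survival of the fittest.
def generate_draft(rng):
    return [rng.random() for _ in range(3)]   # a stand-in 'draft'

def fitness(draft):
    return sum(draft)                          # a stand-in measure

def winning_draft(n_drafts=5, seed=0):
    rng = random.Random(seed)
    drafts = [generate_draft(rng) for _ in range(n_drafts)]
    return max(drafts, key=fitness)            # the competition
```

The point of the sketch is only that a 'best' result can emerge from blind competition among variants, without any draft being singled out in advance.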
|
|
'Pick
the best'? I don't see how it can be the best in the sense of being
the most cogent or useful thought or utterance - it's just the one
that grabs control. The only way you could guarantee it was the best
would be to have some function judging the candidates. But that
would be the kind of central control which the theory of multiple
drafts is supposed to do away with. Moreover, if there is a way of
judging good results, there surely ought to be a way of
generating only good ones to begin with - hence again no need for
the wasteful multiple process. I'm always suspicious when somebody
invokes
'parallel
processing'
. At the end of
the day, I think you're forced to assume some kind of unified
controlling process.
|
|
Absolutely not - and this is a key point
of Dennett's theory. None of this means there's a fixed point in the
brain where the drafts are adjudicated and the thinking gets done.
One of the most seductive delusions about consciousness is that
somewhere there is a place where a picture of the world is displayed
for a 'control centre' to deal with - the myth of the 'Cartesian
Theatre'. There is no such privileged place; no magic homunculus who turns inputs into outputs. I realise
that thinking in terms of a control centre is a hard habit to
break, but it's an error we have to put aside if we're ever going
to get anywhere with consciousness.
Another pervasive
error, while we're on the subject, is the doctrine of 'qualia' - the private, incommunicable
redness of red or indescribable taste of a
particular wine. Qualia are meant to be the
part of an experience which is left over if
you subtract all the objective bits. When
you look at something blue, for example,
you acquire the information that it
is blue: but you also, say the
qualophiles, see blue.
That blue you really see is an example of qualia, and who knows,
they ask, whether the blue qualia you personally experience are
the same as those which impinge on someone else?
Now qualia
cannot have any causal effects (otherwise
we should be able to find objective ways of
signalling to each other which quale we
meant). This has the absurd consequence
that any words written or spoken about them
were not, in fact, caused by the qualia
themselves. There has been a long and
wearisome series of philosophical papers
about inverted spectra,
zombies, hypothetical twin worlds and the
like which purport to prove the existence
of qualia. For many people, this first
person, subjective, qualia-ridden
experience is what consciousness is all
about; the mysterious reason why
computers can never deserve to be regarded
as conscious. But, Dennett says, let's be
clear: there are no such things as
qualia.
There's nothing in the process of perception which is ultimately
mysterious or outside the normal causal system. When I stand in
front of a display of apples, every last little scintilla of subtle
redness is capable of influencing my choice of which one to pick up.
|
|
It's
easy to deny qualia if you want to. In effect you just refuse to
talk about them. But it's a bit sad. Qualia are the really
interesting, essential part of consciousness: the bit that really
matters. Dennett says we'll be alright if we stick to the
third-person point of view (talking about how other people's minds
work, rather than talking about our own); but it's our own,
first-person sensations and experiences that hold the real mystery,
and it's a shame that Dennett should deny himself the challenge of
working on them.
|
|
I grant you qualia are grist to the mill
of academic philosophers - but that's never
been any sign that an issue was actually
real, valid, or even interesting. But in
any case, Dennett hasn't excluded himself
from anything. He proposes that instead of
mystifying ourselves with phenomenology we
adopt a third-person version -
heterophenomenology. In other words,
instead of trying to talk about our ineffable inner experiences, we
should talk about what people report as being their ineffable inner
experiences. When you think about it, this is really all we can do
in any case.
That's Dennett in
a nutshell. Actually, it isn't possible to summarise him that
compactly: one of his great virtues is his wide
range. He covers more aspects of these problems than most
and manages to say interesting things about all of them. Take
the frame problem - the difficulty computer programs have in dealing
with teeming reality and the 'combinatorial explosion' which
results. This is a strong argument against Dennett's
computation-friendly views: yet the best philosophical exposition of
the problem is actually by Dennett himself.
|
|
Mm. If you ask me, he's a
bit too eager to
cover lots of different ideas. In 'Consciousness Explained' he can't
resist bringing in
memes as well as the intentional stance, though it's
far from clear to me that the two are compatible. Surely one
theory at a time is enough, isn't it? Even Putnam disavows his old
theory when he adopts a new one.
|
|
It seems to me
that a complete account of consciousness is
going to need more than one theoretical
insight. Dennett's broad range means he's
said useful things on a broader range of
topics than anyone else. Even if you don't
agree with him, you must admit that that
sceptical view about qualia, for
example, desperately needed articulating.
And it typifies the other thing I like
about Dennett. He's readable, clear, and
original, but above all he really seems as
if he wants to know the truth, whereas most
of the philosophers seem to enjoy
elaborating the discussion far more than
they enjoy resolving it. His theory may
seem strange at first, but after a while I
think it starts to seem like common sense.
Take the analogy with centres of gravity.
People must be something like this in the
final analysis, mustn't they? On the one
hand we're told the self is a mysterious
spiritual entity which will always be
beyond our understanding: on the other
side, some people tell us paradoxically
that the self is an illusion. I don't think
either of these positions is easy to
believe: by contrast, the idea of the self
as a centre of narrative gravity just seems
so sensible, once you've got used to it.
|
|
The problem is, it's blindingly obvious
that whether something is conscious or not
doesn't depend on our stance towards it.
Dennett realises, of course, that we can't
make a bookshelf conscious just by giving
it a funny look, but the required theory of
what makes something a suitable target
for the stance (which is
really the whole point) is never satisfactorily supplied, in
my view, in spite of some talk about 'optimality'.
And that business
about centres of gravity. A centre of gravity acts as a kind of
average for forces which actually act on millions of different
points. Well, there really are people like that - legal
'persons', the contractual entities who provide a vehicle for the
corporate will of partnerships, companies, groups of hundreds of
shareholders and the like. But surely it's obvious that these legal
fictions, which we can create or dispel arbitrarily whenever we
like, are entirely different to the real people who invented them,
and on whom, of course, they absolutely depend.
The fact is,
Dennett's view remains covertly dependent on the very same intuitive
understanding of consciousness it's meant to have superseded. You
can imagine a disciple running into problems like this...
Disciple:
|
Dan, I've absorbed and internalised your theory and at
last I really understand and believe it fully. But
recently I've been having a difficulty.
|
Dennett:
|
What's that?
|
Disciple:
|
Well, I can't seem to adopt the intentional stance any
more.
|
Dennett:
|
Wow. It's really very simple. Deep breaths
now. Look at the target (use me if you like). Now
just attribute to me some plausible conscious states and
intentions.
|
Disciple:
|
But... What would that be like?
What are conscious states? For
you to have conscious states just
means
I can usefully deal with you as if you had ... conscious
states. I seem to be caught in a kind of vicious circle unless
I just somehow know what conscious states are...
|
Dennett:
|
Steady now. Just think, what would I be likely to
do if I had the kind of real, original intentions which
people talk about? How would things with intentions
behave?
|
Disciple:
|
I
have no idea. There are no things with real intentions.
I'm not even sure any more what 'real intentions' means...
|
|
|
Yes, very
amusing, I'm sure. I suppose I can sympathise with you to some
extent. Grasping Dennett's ideas involves giving up a lot of
cherished and ingrained notions, and I'm afraid you're just not
ready (or perhaps able) to make the effort. But the suggestion
that Dennett doesn't tell us what makes something a good target for
the intentional stance is a shocking misrepresentation. It could
hardly be more explicit. Anything which implements a 'Joycean
machine' is conscious. This Joycean machine is the thing, the
program if you like, which produces the multiple drafts. The idea is
that consciousness arises when we turn on ourselves the mechanisms
and processes we use to recognise and understand other people.
Crudely put, consciousness is a process of talking to ourselves
about ourselves: and it's that that makes us susceptible to
explanation through the intentional stance. It's all perfectly
clear.
You obviously
haven't grasped the point about optimality,
either. Suppose you're playing chess. How
do you guess what the other player is
likely to do? The only safe thing to do is
to assume he will make the best possible
move, the optimal move. In effect,
you attribute to him the desire to win
and the intention of out-playing you, and
that helps dramatically in the task of
deciding which pieces he is likely to move.
Intentional systems, entities which
display this kind of complex optimality, deserve to
be regarded as conscious to that extent.
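The optimality point can be sketched as a toy prediction routine (my own illustration, with made-up moves and payoffs, not anything from Dennett): attribute a goal to the opponent, assume optimal play, and the prediction falls out.

```python
# Toy illustration of the intentional stance in chess: attribute to
# the opponent the desire to win, assume they play optimally, and
# predict the move with the highest payoff for them.
def predict_move(moves, payoff_for_opponent):
    return max(moves, key=payoff_for_opponent)

# Hypothetical position: three candidate moves with assumed payoffs.
payoffs = {'capture_queen': 9, 'advance_pawn': 1, 'castle': 3}
print(predict_move(payoffs, payoffs.get))  # capture_queen
```

Notice that nothing here requires opening up the opponent's head: the attribution of a desire plus an optimality assumption does all the predictive work.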
|
|
Yes, yes, I understand. But how do you
know what behaviour is optimal? Things
can't just be inherently optimal:
they're only optimal in the light of a
given desire or plan. In the case of a game
of chess, we take it for granted that
someone just wants to win (though it ain't
necessarily so): but in real-life contexts
it's much more difficult. Attributing
desires and beliefs to people
arbitrarily won't help us predict their
behaviour. Our ability to get the right ones depends on an in-built
understanding of consciousness which Dennett does not explain.
In fact it springs from empathy: we imagine the beliefs and
desires we would have in their place. If we hadn't got real beliefs
and desires ourselves, the whole stance business wouldn't work.
|
|
It isn't
empathy we rely on - at least, not what you mean by empathy. The
process of evolution has fitted out human beings with similar basic
sets of desires (primarily, to survive and reproduce) which can be
taken for granted and used as the basis for deductions about
behaviour. I don't by any means suggest the process is simple or
foolproof (predicting human behaviour is often virtually
impossible) just that treating people as having conscious
desires and beliefs is a good predictive strategy. As a matter of
fact, even attributing incorrect desires and beliefs would help us
falsify some hypotheses more efficiently than trying to predict
behaviour from brute physical calculation.
Speaking of
evolution, it occurs to me that a wider perspective might help you
see the point. Dennett's views can be seen as carrying on a
long-term project of which the theory of evolution formed an
important part. This is the gradual elimination of teleology from
science. In primitive science, almost everything was explained by
attributing consciousness or purpose to things: the sun rose because
it wanted to, plants grew in order to provide shade and food, and so
on. Gradually these explanations have been replaced by better, more
mechanical ones. Evolution was a huge step forward in this
process, since it meant we could explain how animals had developed
without the need to assume that conscious design was part of the
process. Dennett's work takes that kind of thinking into the mind
itself.
|
|
Yes,
but absurdly! It was fine to eliminate
conscious purposes from places where they
had no business, but to eliminate them from
the one place where they certainly do
exist, the mind, is perverse. It's as
though someone were to say, well, you know,
we used to believe the planets moved
because they were gods; then we came to
realise they weren't themselves
conscious beings, but we still believed
they were moved by angels. After a while,
we learnt how to do without the angels: now
it's time to take the final step and admit
that, actually, the planets don't move. That would be no more
absurd than Dennett's view that, as he put it, 'we are all
zombies'.
|
|
A palpably false analogy:
and as for the remark about zombies, it is an
act of desperate intellectual dishonesty to quote
that assertion out of context!
|
|
Read:
|
"Consciousness
Explained"
Perhaps the
single most popular and important book in the field. The main
statement of Dennett's position on consciousness - an
essential text which manages to be highly readable without
over-simplifying the issues.
|
"The Intentional Stance"
A collection of lectures which rounds out
the account - slightly drier but still quite easy to
read.
|
"Brainstorms"
An excellent collection of essays,
ranging more widely. Some of these, notably one on
anaesthesia, deserve to be more widely read.
|
"Cognitive
Wheels: the frame problem of
AI" Included in
several different collections - we
recommend 'The Philosophy of
Artificial Intelligence' (ed.
Margaret A Boden) which has other key
papers. The 'frame problem' has been
interpreted in different ways, and
this is a variation on the way it was
originally conceived, but it is such
a lucid and interesting account that
one is tempted to say, whatever the
frame problem was before,
this is what it is now...
|
"Kinds of
Minds" A shorter statement
of the Dennettian outlook - probably
best seen as an alternative
to "Consciousness Explained" rather than a supplement.
|
"Freedom
evolves"
Dennett sets out the ethical aspect of his views in this account of
free will in a Dennettian world. It shows an impressively
consistent outlook, but lacks the originality and
conviction of his other work. The idea that people and
beliefs are convenient social artefacts is startling and
challenging: the idea that morality is a convenient social
artefact is last week's cold mashed potato, philosophically
speaking. There's still a lot of interesting stuff
and the book is a necessary part of the Dennettian
weltanschauung.
|
Some Links:
|
Home
page
|
Interview in the Guardian - and corrections.
|
Interview in the
Atlantic
|
The
Dualist -
searching philosophical Q&A
|
John Sutton's
links - Large
collection of links to all kinds of on-line material.
|
General
Links
|