Three and a half problems.   

Philosophical discussions of consciousness now tend to centre on two things which (for the time being, at least) set conscious humans apart from 'mere' computers. First, real subjective sensations (qualia); second, in various forms, real understanding or meaning (intentionality). Whether or not they are ultimately attainable by computers, both remain profoundly mysterious even in their human form.

Besides these two dominating issues there is a third problem, namely that of moral responsibility. A robotic person fully equivalent to a human being would have to have not only real experiences and real understanding, but also real moral responsibility for its own actions. It is perhaps less obvious that this third, ethical problem has anything to do with consciousness (which may be one reason why it has received less attention in this context than the other two), but there is certainly some link. Surely we can only be held responsible for those acts we have consciously chosen?

As if that weren’t enough, there is another thing ‘computers can’t do’, namely perceive relevance. Unless a computer has been explicitly programmed to look out for a particular kind of obstacle, it won’t do so: but since the list of potential obstacles to any real-world project goes on forever, they can’t all be dealt with by explicit programming. In the form of the ‘frame problem’, this issue has been invoked in support of the non-computationalist case. The same underlying problem appears in a linguistic guise as the problem of how human beings pick out the implications of sentences with remarkably few clues (‘I’m leaving.’ ‘Who is she?’). More broadly, it may explain why computers can’t deal reliably with meaning: any given sentence can imply an indefinite number of different things in different contexts.

Given the difficulties raised for AI, there is clearly some mystery about how human beings deal with relevance. It seems to us that our ability to spot relevance is connected with our more general ability to recognise things, and hence with conscious awareness. There is a clear link, in any case, between the issue of relevance and the problem of intentionality: both involve, in a very loose sense, seeing the significance of things. We may reasonably hope, therefore, that once we have solved the problem of intentionality, we shall have at least half the answer to the problem of relevance.

By my count then, there are three and a half problems of consciousness:

  • Qualia
  • Intentionality
  • Moral responsibility, and
  • Relevance

Linkages

Besides the links already suggested there are some other connections between these issues. We are morally responsible for our actions in part, at least, because we intend or mean them to take place, and because we are capable of thinking about the consequences. Moral responsibility, in short, depends on intentionality.

What about qualia? The links are less clear here, but it's possible (as Searle suggests) that intentionality derives from qualia: that, for example, the feeling of hunger is inherently about food, and that it is in this kind of ‘aboutness’ that the roots of all intentionality are to be found.

Putting all this together, we might provisionally set out the relationship between the issues like this: intentionality, perhaps, flows from qualia, while responsibility and relevance flow, in different ways, from intentionality.

Qualia

When we see a red word on a page, our brain acquires all sorts of data about the wavelength of the light, the shape and size of the letters, and so on. But there is more to it than that: we also have an experience of redness, and this experience is over and above the mere data-gathering, which a computer could do equally well. These experiences of red, blue, cold, noise, bitterness, and so on are qualia, and it is very hard to give a fully satisfactory account of them. In particular it is hard to find any satisfactory place for them in the account of the world provided by physics. They are 'raw feels': we know they exist because 'there is something it is like' to see red, smell grass, and so on.

The problem of qualia has been at the forefront of recent discussions of consciousness, but somehow it nevertheless lacks the established respectability of some other philosophical issues: it is not one of the old philosophical chestnuts chewed over since the time of Socrates. Perhaps it only began to seem a mystery in its own right once a mechanical explanation of the general workings of the mind seemed possible, though whether this perception of mystery represents enlightenment or bedazzlement remains an open question. It is certainly an exceptionally slippery issue, and most of the literature on the subject is about establishing the existence and nature of the problem, rather than solving it.  

The arguments for the reality of qualia take many forms and some have become very convoluted. A clear exposition of five is given by Chalmers. All the arguments, however, depend on the basic intuition that there is something involved in subjective experience over and above the simple mechanical, physical story, something which could in principle be taken away without affecting the course of that story at all. 

There are three main ways to go on qualia. The first is the Dennettian path of scepticism: there are no qualia, the whole thing is a category mistake, or some other confusion or delusion. This approach avoids an immense number of problems, but it rides roughshod over the very powerful intuitive conviction that there is something more to seeing a red object than just acquiring the knowledge that it is, in fact, red. Instead of having to explain qualia themselves, we have to explain just why so many people find their existence undeniable.

The second course is to start explaining the physical process of perception in the brain and hope that somewhere along the line it will somehow amount to, or include, an explanation of qualia. This approach is particularly appealing to scientists and engineers. Let's just build the robot; if we succeed, we may have picked up our explanation of qualia along the way, and if we succeed without the explanation, maybe it doesn't matter anyway; the philosophers never explained how this stuff works in human beings either, did they? But building the robot turns out to be more difficult than it seemed; and our purely physical theories of the brain, interesting as they may be, don't seem to provide the answer. Either we end up with a purely neurological theory which, whatever may be claimed for it, does not touch on the problem of qualia at all; or we end up reducing qualia to a special kind of flag or label which plays some role in an ordinary computational physical process. There might be such flags, and they might be well worth studying, but they aren't, in the original sense, qualia, and nothing we might say about them can possibly resolve the original problem.

The third path is to accept the full-blooded version of qualia: but this involves insoluble problems. Qualia aren't part of the normal physical process of cause and effect, but speaking and writing are: it follows, bizarrely, that nothing we may say or write about qualia can actually have been caused by them, or by our experience of them. Quite how bad this problem is depends on one's views about reference and causality, but it is very bad even in the best case, and in attempting to deal with it Qualians are driven towards hopeless philosophical positions such as dualism and epiphenomenalism. So far as I can see, there is no-one who accepts the full reality of qualia and who even claims to be able to clear up the mystery completely.

Intentionality

Intentionality is 'aboutness': words, pictures, signs, beliefs, desires, and, indeed, intentions are all about things. The use of the word in this sense goes back to the Middle Ages, and Thomas Aquinas in particular; the topic was revived by Brentano, who thought that intentionality was so obviously inexplicable in physical terms that it proved the existence of a non-material world. But although intentionality is hard to explain, it isn't really an esoteric matter. Indeed, one of the things which make it difficult to deal with is its very familiarity; our normal life is so saturated with meanings and intentions and interpretations that we find it difficult to step back far enough to focus on the subject. There is also, as a result, a constant danger of including words or ideas which presuppose intentionality in the theories which are meant to explain it.

The problem of intentionality crops up under several different names: the problem of meaning is clearly another form of the same issue, for example, and so is the question of semantics, the thing which, according to Searle and others, you cannot get from mere syntax, the kind of formal manipulation which computers are competent to deal with. It arises as a practical issue in the field of machine translation: some simple translations can be performed by just exchanging, say, English words for equivalent French ones and observing a few structural rules, but many translations are difficult to carry out accurately without invoking the meaning of the words.

Simplifying things a bit, we can identify three main challenges which arise from the underlying problem. One, daunting enough in itself, is simply how to give any account at all of what it is for something to be about something else. Another is explaining the way we can talk about things that don't exist, or even about things that couldn't exist, and the way these remote or unreal things somehow have an influence on our real, present speech and behaviour. A third is how we deal with the strange logical properties of intentionality.  

These logical issues require some explanation. Normally, if we substitute a synonym for one of the names in a true sentence, the sentence remains true. If we take the sentence "Alexander's tutor was Aristotle" and happen to know that "Aristotle was a man from Stagira", we can make the switch and get "Alexander's tutor was a man from Stagira", which is also true. But if intentionality comes into the picture, this ceases to be a reliable procedure. If we take "I thought the figure in the school library represented Aristotle" and make the same switch, we get "I thought the figure in the school library represented a man from Stagira", which could have been true, but actually isn't, because at the time I had never heard of Stagira. The intractable nature of problems of this kind led Quine to go so far as to recommend that intentional 'idioms' should be excluded from science and philosophy wholesale: we should just stop attributing intentions and beliefs to people altogether (though he recognised that for everyday purposes the habit would be unbreakable).
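
To make the failure of substitution concrete, here is a small Python sketch (my own illustration; the names and data structures are invented for the purpose). An extensional predicate is evaluated on the referent of a name, so swapping co-referring names preserves its truth; a belief store keyed by the words themselves behaves differently.

```python
# Sketch of referential opacity: substitution of co-referring names is
# truth-preserving in extensional contexts but not inside belief contexts.
# All names and data here are illustrative inventions.

# Two names that pick out the same individual.
REFERENT = {"Aristotle": "person_42", "the man from Stagira": "person_42"}

# An extensional fact, true of the individual himself.
TUTORS_OF_ALEXANDER = {"person_42"}

def was_alexanders_tutor(name):
    # Extensional context: truth depends only on the referent.
    return REFERENT[name] in TUTORS_OF_ALEXANDER

# An intensional context: beliefs are stored under the words used,
# not under the individual those words happen to pick out.
MY_BELIEFS = {"the figure in the library represents Aristotle"}

def i_thought_it_represented(name):
    # Intensional context: truth depends on the description itself.
    return ("the figure in the library represents " + name) in MY_BELIEFS

print(was_alexanders_tutor("Aristotle"))                # True
print(was_alexanders_tutor("the man from Stagira"))     # True: substitution safe
print(i_thought_it_represented("Aristotle"))            # True
print(i_thought_it_represented("the man from Stagira")) # False: substitution fails
```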

However, although the problems are difficult, there are at least some theories on offer. In considering them, we run the risk of being drawn out into the deeper waters of mainstream philosophy, but we would single out four views as especially relevant:

  • information-theoretic explanations;
  • Searlian biological grounding;
  • the Dennettian stance; and
  • Gricean non-natural meaning.

Some look towards an analysis of intentionality based on an information-theoretic perspective. Certain events leave an impression, like the shadow on a wall or the indentation in a cushion, and these impressions can fairly be regarded as about the thing that caused them. Without some further restriction, this view seems unmanageable, with everything covered in unrecognisable information about virtually everything else, but Dretske, for example, offers a more developed version in which the impression is only about the thing that caused it if that earlier cause can be reconstructed from the impression alone. There's no denying the hard-headed appeal of such a view, but it isn't particularly good at explaining the stranger features of intentionality: how do we come to think about imaginary things if thoughts are just impressions left on us by passing objects?
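
The reconstruction condition can be given a toy rendering (my own, simplifying Dretske considerably; the causes and the 'shadow' function are invented): an impression counts as being about its cause only when the cause is uniquely recoverable from the impression alone.

```python
# Toy version of the 'reconstructible cause' condition: an impression is
# about its cause only if that cause can be recovered from the impression
# alone. The mapping below is an illustrative invention.

def impression_of(cause):
    # A lossy 'shadow': several causes may cast the same impression.
    return cause[0]  # keep only the first letter

CAUSES = ["cat", "cup", "dog"]
IMPRESSIONS = {c: impression_of(c) for c in CAUSES}

def is_about(impression):
    # Return the unique cause reconstructible from the impression, if any.
    candidates = [c for c, i in IMPRESSIONS.items() if i == impression]
    return candidates[0] if len(candidates) == 1 else None

print(is_about("d"))  # 'dog': unambiguous, so the shadow is about the dog
print(is_about("c"))  # None: ambiguous between 'cat' and 'cup', so about neither
```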

Searle has an extensive theory of intentionality in which intentional utterances have conditions of satisfaction and a direction of fit. For beliefs the direction of fit goes one way, so that it's up to the utterance to match the world; in the case of intentions it goes the other way, with the world either matching or failing to match the utterance. But at the heart of the system an element of mystery remains, to do with the contribution of consciousness. Searle is in the position of someone with a theory of locomotives which deals extensively with coal, water, axles, flanged wheels, and so on, who acknowledges that in the final analysis he doesn't know what makes the engines move (but it's something to do with the properties of water...). On the Searlian view, intentionality has something to do with special properties of biological matter and, it seems, with subjective states in particular: he mentions examples such as hunger, which he says just is about food. There is something unsatisfactory in this 'just being about': the more so because Searle commends neurological research (and Edelman's programme in particular) as the best way forward. It's hard to see how further data about neurons, however interesting, is going to turn into an explanation of inherent meaningfulness. Nevertheless, Searle's views do capture a widely-felt intuition about the nature of consciousness, perhaps the same one which leads people to consider embodiment a key concept.

Not all forms of intentionality are inherent, anyway. According to Searle (and most other people), we can draw a valid distinction between two kinds of intentionality: real on the one hand, and derived, or 'as-if', on the other. Some things - words on a page, signs on the road - have meanings only because meanings have been given to them. We have decided to use a certain shape to mean a certain sound, or to interpret a particular picture as indicating a certain hazard; in fact, in most cases we have learnt a conventional meaning which was decided on by others long before. But our own thoughts don't get their meanings from any convention, or from our decision to interpret them a particular way: if I think about falling rocks, my thought manages to be about falling rocks without outside help. This is real intentionality, from which the intentionality of signs is derived.

This is one of the main bones of contention between Searle and Dennett. It seems obvious that the meaningfulness of your own thoughts is quite different in kind from the meaningfulness of a fictional character's thoughts: but to Dennett the difference is merely that your own thoughts are part of a more complex and autonomous process: your story may be one you are telling yourself (or I suppose we should say, it may be a self-telling story) but it's still, in the end, a story, not a mysterious source of inherent meaning.  

The Dennettian view typically represents intentionality as something we attribute to other people because it helps us predict the outcome of the immensely complex processes that govern their behaviour. Dennett reaches this position by considering the simpler explanatory strategies we apply to objects and machines, but you might reach a very similar position by considering meaning as a matter of social convention. The approach is plausible for other people (the 'third-person' view) but not so attractive from a first-person perspective. It is more difficult to accept that one's own mind is simply, as it were, a useful metaphor...

Social conventions were at the heart of the rather different approach adopted by H.P. Grice, who identified two kinds of meaning: natural and non-natural. Natural meaning is closely related to the ordinary ability to see the implications of objects and circumstances. If spots break out on someone's face, we may be able to diagnose measles and foresee the course of the disease: in other words, the spots mean measles. Non-natural meaning is the kind possessed by speech and books, where someone intends the meaning. Grice pointed out that if the audience recognises the intention to communicate 'A', that very recognition means that 'A' has in fact been communicated. If I recognise that you mean to let me know that the monkey is on the table, I now have, by virtue of that recognition, the information that the monkey is on the table; so your communicative intention has been successful simply through being recognised.

Grice himself was not particularly seeking a fundamental analysis of intentionality: he was more concerned with discovering the implicit rules which make effective conversation possible; we are, for example, entitled to assume that people do not include irrelevant information in what they say, nor leave out something important. But there does seem to be an interesting insight here. 

Moral Responsibility

There is an uneasy relationship between responsibility and causation. You are responsible for some event only in cases where you caused it to happen and could have done otherwise. But in principle we know that the laws of cause and effect mean that, strictly speaking, there are no cases in which you could have done otherwise: your behaviour was ultimately determined by the laws of physics just as much as anything else. Here again we are in danger of being drawn out into the depths of philosophy: this problem is clearly that old chestnut, free will versus determinism.

The linkage between consciousness and morality has not received an enormous amount of attention: Dennett alone appears to have felt that his views about consciousness needed to be rounded out with some consideration of free will and morality. In general, the issues have been treated almost as though consciousness did not necessarily imply personhood. It surely does, but if so, attempts to produce a conscious robot or computer program are attempts to transfer moral responsibility from the engineers or the programmers to their creation. If a way could be found to do this, it would surely have enormous practical implications. The ability to have certain things done without being morally responsible for them sounds both convenient and paradoxical.

Broadly speaking, one's views on consciousness are likely to run parallel with one's views on free will. Very briefly, there are three broad lines one can take on free will: that it is, in fact, an illusion; that it is real but mysteriously at odds with physics; or that the two are compatible. Those who believe that the self and/or consciousness are illusory would most naturally find themselves sceptics about free will; those who believe in the straightforward reality of free will would be inclined to see a deep mystery (or at least, something outside the scope of current physics) in consciousness. If we are going to say that free will and physics are compatible, on the other hand, we must accept that in some strict sense all events are predetermined, and find another sense in which we 'could have done otherwise'; and we might choose to appeal to consciousness as the explanation. We might, for example, say that we 'could have done otherwise' in cases where our action was determined by conscious thought, rather than being dictated by instinct or by someone else holding a gun to our head.

Although moral questions have not been one of the main elements in most discussions of consciousness to date, it does seem that uneasiness about the potential moral implications helps to motivate some of the opposition to computational and deterministic theories. The more we succeed in explaining people's behaviour as due to the mechanical operation of biochemical systems, the more it seems we move away from ideas of personal responsibility and (in particular) punishment. Without these ideas, it may be felt, the orderly structure of society, and the basis for self-belief and self-improvement, are threatened.

Relevance

Relevance bears on consciousness in two related forms: the so-called ‘frame problem’ and the pragmatic problem of interpreting normal human speech or writing, which often leaves the recipient the task of drawing many essential inferences. 

The original frame problem as conceived by McCarthy and Hayes was a problem in artificial intelligence. In order to deal with a particular situation, a robot needed a stock of knowledge about it. It also needed to be able to update that knowledge as the environment changed. This proved to require a larger amount of data storage than might have been expected: for one thing it turned out that the robot needed facts about what hadn’t changed as well as facts about what had. When dealing with phone calls, for example, it needed a specific statement that if a man has a phone and a directory and then looks up a number, he still has the phone (actually something that can by no means be taken for granted if our own experience is anything to go by). Generating a ‘no-change’ statement for every object in the environment every time something changed rapidly became a huge task.
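
The explosion is easy to exhibit in miniature. The following sketch (my own construction, not McCarthy and Hayes's notation) treats the robot's knowledge as a dictionary of facts and shows how every action forces a 'no-change' statement for each untouched fact:

```python
# Toy illustration of frame axioms piling up: after each action a naive
# logical database must record not only what changed but, for every other
# fact, that it did not change. The facts here are invented examples.

facts = {"has_phone": True, "has_directory": True, "knows_number": False}

def apply_action(facts, changes):
    """Apply an action and return the 'no-change' statements it forces."""
    frame_axioms = []
    for fact in facts:
        if fact in changes:
            facts[fact] = changes[fact]
        else:
            frame_axioms.append(fact + " is unchanged by this action")
    return frame_axioms

# Looking up a number changes one fact but generates a no-change statement
# for every other fact; with thousands of facts and actions, this explodes.
print(apply_action(facts, {"knows_number": True}))
# ['has_phone is unchanged by this action',
#  'has_directory is unchanged by this action']
```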

One reason for the problem was that the only means available to a robot for drawing conclusions from existing data was formal logic: propositional and predicate calculus and whatever formal extensions the designer or programmer might come up with. In a system based on classical logic like this, every proposition is given one of two values, true or false, and contradictory propositions cannot be allowed into your database. According to McCarthy, non-monotonic logics, which don't force everything to be absolutely true or absolutely false, and which can tolerate a degree of inconsistency in the database, are likely to provide the solution to the frame problem. They certainly seem a little more like normal thought processes than propositional calculus (which, as a matter of fact, doesn't seem at all like ordinary thought processes), but coming up with a non-monotonic system which does the job is not a simple matter.
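
The flavour of the non-monotonic alternative can be suggested with a single default rule (my illustration, not any specific system of McCarthy's): a conclusion is drawn in the absence of contrary information and withdrawn when that information arrives, something classical logic cannot do, since adding premises there never removes conclusions.

```python
# Toy default reasoning in the non-monotonic spirit: 'birds fly' holds by
# default and is withdrawn when contrary information is added.

def conclusions(known):
    drawn = set(known)
    # Default rule: birds fly, unless we have learned otherwise.
    if "bird" in drawn and "cannot_fly" not in drawn:
        drawn.add("flies")
    return drawn

print(conclusions({"bird"}))                # includes 'flies'
print(conclusions({"bird", "cannot_fly"})) # 'flies' has been withdrawn
```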

For the time being, anyway, there is no perfect solution to the classic problem, but within the AI field it is not considered insuperable. There are strategies for dealing with it (build into your system an assumption that nothing changes, for example) and though none is perfect, it is generally viewed as just something to bear in mind and work round.  

However, philosophers (Dennett in particular, whose essay 'Cognitive Wheels: the Frame Problem of AI' is a classic) see much wider implications, and a wider form of the problem. This appears to be a very serious issue for any form of AI designed to cope with real-world situations, and it raises profound and difficult questions about how the human mind works.

It is generally accepted that in normal thought processes we draw on an immense background of common-sense knowledge about the world. When I make a sandwich, I don't worry about whether the bread will be too heavy to pick up, or whether the butter will fail to stick to it. There is in principle an unlimited amount of this kind of information accessible to me; and the issue of how I manage to store it all, or generate it from a more manageably-sized set of information, is a substantial problem in itself. But even if we assume I can hold this vast encyclopaedia in my head, how on earth do I manage to pick out just those pieces of information which need to inform my actions in a particular context? The vast majority of it seems obviously irrelevant, but in particular circumstances it may suddenly become pertinent. When I move my knife in buttering the bread, I alter its spatial co-ordinates in relation to other objects. A vast set of facts about distances thereby changes (in fact there are clearly an infinite number of such facts). Virtually all of these facts are supremely uninteresting, but if one of them happened to be that the distance between the knife and my eye, or the electric socket, or the priceless vase recklessly left by the bread-board, was now zero, it would be highly relevant. This seems so obvious to us - we pick out relevance so automatically - that it is quite difficult to see the problem. Why not pay attention just where a danger is less than five centimetres away, say, and ignore all the other distance facts? But how am I going to know which distances are less than five centimetres without thinking briefly about every one of them first (never mind the issue of defining in advance what is, or might become, a danger)? It's tempting to think that the problem is strictly non-computable, but it seems doubtful whether it can even be formulated rigorously enough to allow that to be proved.
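
A toy version (my own construction, with invented objects and numbers) shows why the threshold idea doesn't dissolve the problem: even a simple 'danger within five centimetres' filter must examine every distance fact in order to dismiss it.

```python
# Toy version of the relevance problem: a threshold filter still visits
# every fact, relevant or not.
import random

# Distances (in cm) from the knife to a million tracked objects.
distances = {"object_%d" % i: random.uniform(10, 500) for i in range(1_000_000)}
distances["my_eye"] = 2.0  # the one fact that actually matters

def relevant(distances, threshold_cm=5.0):
    # The comprehension touches every entry in order to dismiss it.
    return {obj: d for obj, d in distances.items() if d < threshold_cm}

print(relevant(distances))  # {'my_eye': 2.0}: one hit, a million checks
```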

Our effortless capacity to pick out just the relevant facts to consider before acting is matched by a similar capacity to pick out the facts which bear on how we interpret a sentence. H.P. Grice considered various kinds of dialogue in which much of the relevant information is omitted. If I remark that there's no milk left, and a colleague tells me I shouldn't worry because they're going to the post office in a few minutes, I easily pick up the implied offer to buy some milk. Grice pointed out that in interpreting such sentences, I actually rely on the assumption that people try to include just the right kind and amount of relevant information in their utterances. I am therefore entitled to assume that the post office remark has something to do with the milk issue, and am able to pick the most likely reading. Grice's work has given rise to much further study of similar examples in the field of pragmatics, notably by Sperber and Wilson, who reject the idea that meanings are simply encoded in utterances and propose a principle of relevance instead. They suggest that relevance is proportional to contextual effects: from our point of view, however, this is a clarification rather than a solution.

So that, in my view, sums up the three and a half problems of consciousness. Many different answers have been proposed, some of the most noteworthy of which are covered on this site. Judge for yourself which of them, if any, is correct.
