Archive for March, 2015

This is the first of four posts about key ideas from my book The Shadow of Consciousness. We start with the so-called Easy Problem, about how the human mind does its problem-solving, organism-guiding thing. If robots have Artificial Intelligence, we might call this the problem of Natural Intelligence.

I suggest that the real difficulty here is with what I call inexhaustible problems – a family of issues which includes non-computability, but goes much wider. For the moment all I aim to do is establish that this is a meaningful group of problems and just suggest what the answer might be.

It’s one of the ironies of the artificial intelligence project that Alan Turing both raised the flag for the charge and also set up one of the most serious obstacles. He declared that by the end of the twentieth century we should be able to speak of machines thinking without expecting to be contradicted; but he had already established, in his solution to the Halting Problem, that certain questions are unanswerable by the Universal Turing Machine and hence by the computers that approximate it. The human mind, though, is able to deal with these problems: so he seemed to have identified a wide gulf separating the human and computational performances he thought would come to be indistinguishable.
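The shape of Turing's argument can be sketched in a few lines of Python. This is only an illustration of the diagonal construction, not a proof: `pessimist` is a stand-in I've invented for an arbitrary candidate decider, and the point is that `contrary` makes any such candidate wrong about at least one input.

```python
# Sketch of the diagonal argument behind the Halting Problem.
# Suppose halts(f, x) were a total, correct decider for the question
# "does f(x) ever halt?". The construction below defeats any candidate.

def make_contrary(halts):
    """Build a program that does the opposite of whatever the
    supposed decider predicts about it."""
    def contrary(f):
        if halts(f, f):
            while True:   # loop forever exactly when halts says f(f) halts
                pass
        return "halted"
    return contrary

# A (deliberately wrong) candidate decider that claims nothing ever halts:
def pessimist(f, x):
    return False

contrary = make_contrary(pessimist)
verdict = pessimist(contrary, contrary)  # the decider says: never halts...
outcome = contrary(contrary)             # ...but it promptly halts
```

Whatever the candidate decider answers about `contrary` applied to itself, `contrary` does the opposite, so no candidate can be correct everywhere.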

Turing himself said it was, in effect, merely an article of faith that the human mind did not ultimately, in respect of some problems, suffer the same kind of limitations as a computer; no-one had offered to prove it.

Non-computability, at any rate, was found to arise for a large set of problems; another classic example is the Tiling Problem. This relates to sets of tiles whose edges match, or fail to match, rather like dominoes. We can imagine that the tiles are square, with each edge a different colour, and that the rule is that wherever two edges meet, they must be the same colour. Certain sets of tiles will fit together in such a way that they tile the plane – cover an infinite flat surface – while others won’t: after a while it becomes impossible to place another tile that matches. The problem is to determine whether any given set will tile the plane or not. This turns out, unexpectedly, to be a problem computers cannot answer. For certain sets of tiles an algorithmic approach works fine: those that fail to tile the plane quite rapidly, and those that do so by forming repeating patterns like wallpaper. The fly in the ointment is that some elegant sets of tiles will cover the plane indefinitely, but only in a non-repeating, aperiodic way; when confronted with these, computational processes run on forever, unable to establish that the pattern will never begin repeating. Human beings, by resorting to other kinds of reasoning, can determine that these sets do indeed tile the plane.
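A minimal sketch may show where the algorithm succeeds and where it must fail. The backtracking search below (my own toy representation: Wang-style tiles as (top, right, bottom, left) colour tuples, no rotation allowed) decides the finite question of whether a tile set can fill an n×n square. The undecidable question is whether such squares keep succeeding for every n, which no finite run of this procedure can settle for an aperiodic set.

```python
def can_tile(tiles, n):
    """Backtracking search: can copies of the given tiles, each a
    (top, right, bottom, left) colour tuple with no rotation, fill an
    n x n square so that all touching edges match? This decides only
    the finite n x n question; whether squares of *every* size can be
    filled is the non-computable part."""
    grid = [[None] * n for _ in range(n)]

    def place(i):
        if i == n * n:
            return True
        r, c = divmod(i, n)
        for t in tiles:
            if c > 0 and grid[r][c - 1][1] != t[3]:
                continue  # left neighbour's right edge must match our left
            if r > 0 and grid[r - 1][c][2] != t[0]:
                continue  # upper neighbour's bottom edge must match our top
            grid[r][c] = t
            if place(i + 1):
                return True
        grid[r][c] = None  # dead end: backtrack
        return False

    return place(0)

# One tile with all edges the same colour trivially tiles any square:
trivial_ok = can_tile([(0, 0, 0, 0)], 3)
# A tile whose right edge (1) never matches its left edge (2) fails fast:
mismatch_ok = can_tile([(0, 1, 0, 2)], 2)
```

For periodic or quickly-failing sets this kind of search terminates; for an aperiodic set, growing n never produces a repeat and never produces a contradiction, which is exactly the situation described above.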

Roger Penrose, who designed some examples of these aperiodic sets of tiles, also took up the implicit challenge thrown down by Turing, by attempting to prove that human thought is not affected by the limitations of computation. Penrose offered a proof that human mathematicians are not using a knowably sound algorithm to reach their conclusions. He did this by providing a cunningly self-referential proposition stated in an arbitrary formal algebraic system; it can be shown that the proposition cannot be proved within the system, but it is also the case that human beings can see that in fact it must be true. Since all computers are running formal systems, they must be affected by this limitation, whereas human beings could perform the same extra-systemic reasoning whatever formal system was being used – so they cannot be affected in the same way.

Besides the fact that the human mind is not restricted to a formal system, Penrose established that it out-performs the machine by looking at meanings; the proposition in his proof is seen to be true because of what it says, not because of its formal syntactical properties.

Why is it that machines fail on these challenges, and how? In all these cases of non-computability the problem is that the machines start on processes which continue forever. The Turing Machine never halts, the tiling patterns never stop getting bigger – and indeed, in Penrose’s proof the list of potential proofs which has to be examined is similarly infinite. I think this rigorous kind of non-computability provides the sharpest, hardest-edged examples of a wider and vaguer family of problems arising from inexhaustibility.

A notable example of inexhaustibility in the wider sense is the Frame Problem, or at least its broader, philosophical version. In Dennett’s classic exposition, a robot fails to notice an important fact; the trolley that carries its spare battery also bears a bomb. Pulling out the trolley has fatal consequences. The second version of the robot looks for things that might interfere with its safely regaining the battery, but is paralysed by the attempt to consider every logically possible deduction about the consequences of moving the trolley. A third robot is designed to identify only relevant events, but is equally paralysed by the task of considering the relevance of every possible deduction.

This problem is not so sharply defined as the Halting Problem or the Tiling Problem, but I think it’s clear that there is some resemblance; here again computation fails when faced with an inexhaustible range of items. Combinatorial explosion is often invoked in these cases – the idea that when you begin looking at permutations of elements the number of combinations rises exponentially, too rapidly to cope with: that’s not wrong, but I think the difficulty is deeper and arises earlier. Never mind combinations: even the initial range of possible elements for the AI to consider is already indefinably large.
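The combinatorial point can be made concrete with a back-of-envelope count. Before the robot even begins chaining deductions together, merely choosing which subset of its n known facts might be relevant already presents 2^n candidates (the function name below is my own illustrative label, not anything from the AI literature):

```python
# Before any chaining of deductions, just selecting which of n known
# facts might bear on the trolley gives 2**n candidate subsets to vet.
def candidate_relevance_sets(n_facts):
    return 2 ** n_facts

small = candidate_relevance_sets(10)  # a toy robot: 1,024 subsets
large = candidate_relevance_sets(50)  # a modest knowledge base: ~10**15
```

Even fifty facts yield over a quadrillion subsets; and, as the paragraph above notes, the deeper trouble is that the initial list of facts itself has no clear boundary.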

Inexhaustible problems are not confined to AI. I think another example is Quine’s indeterminacy of translation. Quine considered the challenge of interpreting an unknown language by relating the words used to the circumstances in which they were uttered. Roughly speaking, if the word “rabbit” is used exactly when a rabbit is visible, that’s what it must mean; and through a series of observations we can learn the whole language. Unfortunately, it turns out that there is always an endless series of other things which the speaker might mean. Common sense easily rejects most of them – who on earth would talk about “sets of undetached rabbit parts”? – but what is the rigorous method that explains and justifies the conclusions that common sense reaches so easily? I said this was not an AI problem, but in a way it feels like one; arguably Quine was looking for the kind of method that could be turned into an algorithm.

In this case, we have another clue to what is going on with inexhaustible problems, albeit one which itself leads to a further problem. Quine assumed that the understanding of language was essentially a matter of decoding; we take the symbols and decode the meaning, process the meaning and recode the result into a new set of symbols. We know now that it doesn’t really work like that: human language rests very heavily on something quite different – the pragmatic reading of implicatures. We are able to understand other people because we assume they are telling us what is most relevant, and that grounds all kinds of conclusions which cannot be decoded from their words alone.

A final example of inexhaustibility requires us to tread in the footsteps of giants; David Hume, the Man Who Woke Kant, discovered a fundamental problem with cause and effect. How can we tell that A causes B? B consistently follows A, but so what? Things often follow other things for a while and then stop. The law of induction allows us to conclude that if A is regularly followed by B, it will go on being so followed. But what justifies the law of induction? After all, many potential inductions are obviously false. Until quite recently a reasonable induction told us that Presidents of the United States were always white men.

Dennett pointed out that, although they are not the same, the Frame Problem and Hume’s problem have a similar feel. They appear quite insoluble, yet ordinary human thought deals with them so easily it’s sometimes hard to believe the problems are real. It’s hard to escape the conclusion that the human mind has a faculty which deals with inexhaustible problems by some non-computational means. Over and over again we find that the human approach to these problems depends on a grasp of relevance or meaning; no algorithmic approach to either has been found.

So I think we need to recognise that this wider class of inexhaustible problem exists and has some common features. Common features suggest there might be a common solution, but what is it? Cutting to the chase, I think that in essence the special human faculty which lets us handle these problems so readily is simply recognition. Recognition is the ability to respond to entities in the world, and the ability to recognise larger entities as well as smaller ones within them opens the way to ‘seeing where things are going’ in a way that lets us deal with inexhaustible problems.

As I suggested recently, recognition is necessarily non-algorithmic. To apply rules, we need to have in mind the entities to which the rules apply. Unless these are just given, they have to be recognised. If recognition itself worked on the basis of rules, it would require us to identify a lower set of entities first – which again, could only be done by recognition, and so on indefinitely.

In our intellectual tradition, an informal basis like this feels unsatisfying, because we want proofs; we want something like Euclid, or like an Aristotelian syllogism. Hume took it that cause and effect could only be justified by either induction or deduction; what he really needed was recognition: recognition of the underlying entity of which both cause and effect are part. When we see that B is the result of A, we are really recognising that B is A a little later and transformed according to the laws of nature. Indeed, I’d argue that sometimes there is no transformation: the table sitting quietly over there is the cause of its own existence a few moments later.

As a matter of fact I claim that while induction relies directly on recognising underlying entities, even logical deduction is actually dependent on seeing the essential identity, under the laws of logic, of two propositions.

Maybe you’re provisionally willing to entertain the idea that recognition might work as a basis for induction, sort of. But how, you ask, does recognition deal with all the other problems? I said that inexhaustible problems call for mastery of meaning and relevance: how does recognition account for those? I’ll try to answer that in part 2.

It had to happen eventually. I decided it was time I nailed my colours to the mast and said what I actually think about consciousness in book form: and here it is. The Shadow of Consciousness (A Little Less Wrong) has two unusual merits for a book about consciousness: it does not pretend to give the absolute final answer about everything; and more remarkable than that, it features no pictures at all of glowing brains.

Actually it falls into three parts (only metaphorically – this is a sturdy paperback product or a sound Kindle ebook, depending on your choice). The first is a quick and idiosyncratic review of the history of the subject. I begin with consciousness seen as the property of things that move without being pushed (an elegant definition and by no means the worst) and well, after that it gets a bit more complicated.

The underlying theme here is how the question itself has changed over time, and crucially become less a matter of intellectual justifications and more a matter of practical blueprints for robots. The robots are generally misconceived, and may never really work – but the change of perspective has opened up the issues in ways that may be really helpful.

The second part describes and solves the Easy Problem. No, come on. What it really does is look at the unforeseen obstacles that have blocked the path to AI and to a proper understanding of consciousness. I suggest that a series of different, difficult problems are all in the end members of a group, all of which arise out of the inexhaustibility of real-world situations. The hard core of this group is the classical non-computability established for certain problems by Turing, but the Frame Problem, Quine’s indeterminacy of translation, the problem of relevance, and even Hume’s issues with induction, all turn out to be about the inexhaustible complexity of the real world.

I suggest that the brain uses the pre-formal, anomic (rule-free) faculty of recognition to deal with these problems, and that that in turn is founded on two special tools: a pointing ability which we can relate to H. P. Grice’s concept of natural meaning, and a doubly ambiguous approach to pattern matching which is highlighted by Edelman’s analogy with the immune system.

The third part of the book tackles the Hard Problem. It flails around for quite a while, failing to make much sense of qualia, and finally suggests that in fact there is only one quale; that is, that the special vividness and particularity of real experience which is attributed to qualia is in fact simply down to the haecceity – the ‘thisness’ of real experience. In the classic qualia arguments, I suggest, we miss this partly because we fail to draw the correct distinction between existence and subsistence (honestly the point is not as esoteric as it sounds).

Along the way I draw some conclusions about causality and induction and how our clerkish academic outlook may have led us astray now and then.

Not many theories have rated more than a couple of posts on Conscious Entities, but I must say I’ve rather impressed myself with my own perspicacity, so I’m going to post separately about four of the key ideas in the book, alternating with posts about other stuff. The four ideas are inexhaustibility, pointing, haecceity and reality. Then I promise we can go back to normal.

I’ll close by quoting from the acknowledgements…

… it would be poor-spirited of me indeed not to tip my hat to the regulars at Conscious Entities, my blog, who encouraged and puzzled me in very helpful ways.

Thanks, chaps. Not one of you, I think, will really agree with what I’m saying, and that’s exactly as it should be.

An interesting piece in Aeon by David Deutsch. There was a shorter version in the Guardian, but it just goes to show how even reasonably intelligent editing can mess up a piece. There were several bits in the Guardian version where I was thinking to myself: ooh, he’s missed the point a bit there, he doesn’t really get that: but on reading the full version I found those very points were ones he actually understood very well. In fact he talks a lot of sense and has some real insights.

Not that everything is perfect. Deutsch quite reasonably says that AGI, artificial general intelligence, machines that think like people, must surely be possible. We could establish that by merely pointing out that if the brain does it, then it seems natural that a machine must be able to do it: but Deutsch invokes the universality of computation, something he says he proved in the 1980s. I can’t claim to understand all this in great detail, but I think what he proved was the universality in principle of quantum computation: but the notion of computation used was avowedly broader than Turing computation. So it’s odd that he goes on to credit Babbage with discovering the idea, as a conjecture, and Turing with fully understanding it. He says of Turing:

He concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written.

That seems too sweeping to me: it’s not unlikely that Turing did believe those things, but they go far beyond his rather cautious published claims, something we were sort of talking about last time.

I’m not sure I fully grasp what people mean when they talk about the universality of computation. It seems that they mean any given physical state of affairs can be adequately reproduced, or at any rate emulated to any required degree of fidelity, by computational processes. This is probably true: what it perhaps overlooks is that for many commonplace entities there is no satisfactory physical description. I’m not talking about esoteric items here: think of a vehicle, or to be Wittgensteinian, a game. Being able to specify things in fine detail, down to the last atom, is simply no use in either case. There’s no set of descriptions of atom placement that defines all possible vehicles (virtually anything can be a vehicle) and certainly none for all possible games, which, given the fogginess of the idea, could easily correspond with any physical state of affairs. These items are defined on a different level of description, in particular one where purposes and meanings exist and are relevant. So unless I’ve misunderstood, the claimed universality is not as universal as we might have thought.

However, Deutsch goes on to suggest, and quite rightly, I think, that what programmed AIs currently lack is a capacity for creative thought. Endowing them with this, he thinks, will require a philosophical breakthrough. At the moment he believes we still tend to believe that new insights come from induction; whereas ever since Hume there has been a problem over induction, and no-one knows how to write an algorithm which can produce genuine and reliable new inductions.

Deutsch unexpectedly believes that Popperian epistemology has the solution, but has been overlooked. Popper, of course, took the view that scientific method was not about proving a theory but about failing to disprove one: so long as your hypotheses withstood all attempts to prove them false (and so long as they were not cast in cheating ways that made them unfalsifiable) you were entitled to hang on to them.

Maybe this helps to defer the reckoning so far as induction is concerned: it sort of kicks the can down the road indefinitely. The problem, I think, is that the Popperian still has to be able to identify which hypotheses to adopt in the first place; there’s a very large if not infinite choice of possible ones for any given set of circumstances.

I think the answer is recognition: I think recognition is the basic faculty underlying nearly all of human thought. We just recognise that certain inductions, and certain sequences of events that might be cases of cause and effect, are sound examples: and our creative thought is very largely powered by recognising aspects of the world we hadn’t spotted before.

The snag is, in my view, that recognition is unformalisable and anomic – lacking in rules. I have a kind of proof of this. In order to apply rules, we have to be able to identify the entities to which the rules should be applied. This identification is a matter of recognising the entities. But recognition cannot itself be based on rules, because that would then require us to identify the entities to which those rules applied – and we’d be caught in a vicious circle.

It seems to follow that if no rules can be given for recognition, no algorithm can be constructed either, and so one of the basic elements of thought is just not susceptible to computation. Whether quantum computation is better at this sort of thing than Turing computation is a question I’m not competent to judge, but I’d be surprised if the idea of rule-free algorithms could be shown to make sense for any conception of computation.

So that might be why AGI has not come along very quickly. Deutsch may be right that we need a philosophical breakthrough, although one has to have doubts about whether the philosophers look likely to supply it: perhaps it might be one of those things where the practicalities come first and then the high theory is gradually constructed after the fact. At any rate Deutsch’s piece is a very interesting one, and I think many of his points are good. Perhaps if there were a book-length version I’d find that I actually agree with him completely…

The Hard Problem may indeed be hard, but it ain’t new:

Twenty years ago, however, an instant myth was born: a myth about a dramatic resurgence of interest in the topic of consciousness in philosophy, in the mid-1990s, after long neglect.

So says Galen Strawson in the TLS: philosophers have been talking about consciousness for centuries. Most of what he says, including his main specific point, is true, and the potted history of the subject he includes is good, picking up many interesting and sensible older views that are normally overlooked (most of them overlooked by me, to be honest). If you took all the papers he mentioned and published them together, I think you’d probably have a pretty good book about consciousness. But he fails to consider two very significant factors and rather over-emphasises the continuity of discussion in philosophy and psychology, leaving a misleading impression.

First, yes, it’s absolutely a myth that consciousness came back to the fore in philosophy only in the mid-1990s, and that Francis Crick’s book The Astonishing Hypothesis was important in bringing that about. The allegedly astonishing hypothesis, identifying mind and brain, had indeed been a staple of philosophical discussion for centuries.  We can also agree that consciousness really did go out of fashion at one stage: Strawson grants that the behaviourists excluded consciousness from consideration, and that as a result there really was an era when it went through a kind of eclipse.

He rather underplays that, though, in two ways. First, he describes it as merely a methodological issue. It’s true that the original behaviourists stopped just short of denying the reality of consciousness, but they didn’t merely say ‘let’s approach consciousness via a study of measurable behaviour’, they excluded all reference to consciousness from psychology, an exclusion that was meant to be permanent. Second, the leading behaviourists were just the banner bearers for a much wider climate of opinion that clearly regarded consciousness as bunk, not just a non-ideal methodological approach. Interestingly, it looks to me as if Alan Turing was pretty much of this mind. Strawson says:

But when Turing suggests a test for when it would be permissible to describe machines as thinking, he explicitly puts aside the question of consciousness.

Actually Turing barely mentions consciousness; what he says is…

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion.

The question of consciousness must be at least equally meaningless in his eyes. Turing here sounds very like a behaviourist to me.

What he does represent is the appearance of an entirely new element in the discussion. Strawson represents the history as a kind of debate within psychology and philosophy: it may have been like that at one stage: a relatively civilised dialogue between the elder subject and its offspring. They’d had a bit of a bust-up when psychology ran away from home to become a science, but they were broadly friends now, recognising each other’s prerogatives, and with a lot of common heritage. But in 1950, with Turing’s paper, a new loutish figure elbowed its way onto the table: no roots in the classics, no long academic heritage, not even really a science: Artificial Intelligence. But the new arrival seized the older disciplines by the throat and shook them until their teeth rattled, threatening to take the whole topic away from them wholesale.  This seminal, transformational development doesn’t feature in Strawson’s account at all. His version makes it seem as if the bitchy tea-party of philosophy continued undisturbed, while in fact after the rough intervention of AI, psychology’s muscular cousin neurology pitched in and something like a saloon bar brawl ensued, with lots of disciplines throwing in the odd punch and even the novelists and playwrights hitching up their skirts from time to time and breaking a bottle over somebody’s head.

The other large factor he doesn’t discuss is the religious doctrine of the soul. For most of the centuries of discussion he rightly identifies, one’s permitted views about the mind and identity were set out in clear terms by authorities who in the last resort would burn you alive. That has an effect. Descartes is often criticised for being a dualist; we have no particular reason to think he wasn’t sincere, but we ought to recognise that being anything else could have got him arrested. Strawson notes that Hobbes got away with being a materialist and Hume with saying things that strongly suggested atheism; but they were exceptions, both in the more tolerant (or at any rate more disorderly) religious environment of Britain.

So although Strawson’s specific point is right, there really was a substantial sea change: earlier and more complex, but no less worthy of attention.

In those long centuries of philosophy, consciousness may have got the occasional mention, but the discussion was essentially about thought, or the mind. When Locke mentioned the inverted spectrum argument, he treated it only as a secondary issue, and the essence of his point was that the puzzle which was to become the Hard Problem was nugatory, of no interest or importance in itself.

Consciousness per se took centre stage only when religious influence waned and science moved in. For the structuralists like Wundt it was central, but the collapse of the structuralist project led directly to the long night of behaviourism we have already mentioned. Consciousness came back into the centre gradually during the second half of the twentieth century, but this time instead of being the main object of attention it was pressed into service as the last defence against AI; the final thing that computers couldn’t do. Whereas Wundt had stressed the scientific measurement of consciousness, its unmeasurability was now the very thing that made it interesting. This meant a rather different way of looking at it, and the gradual emergence of qualia for the first time as the real issue. Strawson is quite right of course that this didn’t happen in the mid-nineties; rather, David Chalmers’ formulation cemented and clarified a new outlook which had already been growing in influence for several decades.

So although the Hard Problem isn’t new, it did become radically more important and central during the latter part of the last century; and as yet the sheriff still ain’t showed up.

This editorial piece notes that we still haven’t nailed down the neural correlates of consciousness (NCCs). It’s part of a Research Topic collection on the subject, and it mentions three candidates featured in the papers which have been well-favoured but now – arguably at any rate – seem to have been found wanting. This old but still useful paper by David Chalmers lists several more of the old contenders. Though naturally a little downbeat, the editorial piece addresses some of the problems and recommends a fresh assault. However, if we haven’t succeeded after twenty-five or thirty years of trying, perhaps common sense suggests that there might be something fundamentally wrong with the project?

There must be neural correlates of consciousness, though, mustn’t there? Unless we’re dualists, and perhaps even if we are, it seems hard to imagine that mental events are not matched by events in the brain. We have by now a wealth of evidence that stimulating parts of the brain can generate conscious experiences artificially, and we’ve always known that damage to the brain damages the mind; sometimes in exquisitely particular ways. So what could be wrong with the basic premise that there are neural correlates of consciousness?

First, consciousness could itself be a mixed bag of different things, not one consistent phenomenon. Conscious states, after all, include such things as being visually aware of a red light; rehearsing a speech mentally; meditating; and waiting for the starting pistol. These things are different in themselves and it’s not particularly likely that their neuronal counterparts will resemble each other.

Then it could be realised in multiple ways. Even if we confine ourselves to one kind of consciousness, there’s no guarantee that the brain always does it the same way. If we assume for the sake of argument that consciousness arises from a neuronal function, then perhaps several different processes will do, just as a bucket, a hose, a fountain and a sewer all serve the function of moving water.

Third, it could well be that consciousness arises, not from any property of the neurons doing the thinking, but from the context they do it in. If the higher order theorists were right, to take one example, for a set of neurons to be conscious would require that another set of neurons was directed at them – so that there was a thought about the thought. But whether another set of neurons is executing a function about our first set of neurons is not an observable property of the first set of neurons. As another example it might be that theories of embodiment are true in a strong sense, implying that consciousness depends on context external to the brain altogether.

Fourth, consciousness might depend on finely detailed properties that require very complex decoding. Suppose we have a library and we want to find out which books in it mention libraries; we have to read them to find out. In a somewhat similar way we might have to read the neurons in our brain in detail to find out whether they were supporting consciousness.

Quite apart from these problems of principle, of course, we might reasonably have some reservations about the technology. Even the best scanners have their limitations, typically showing us proxies for the general level of activity in a broad area rather than pinpointing the activity of particular neurons; and it isn’t feasible or ethical to fill a subject’s brain with electrodes. With the equipment we had twenty-five years ago, it was staggeringly ambitious to think we could crack the problem, but even now we might not really be ready.

All that suggests that the whole idea of Neural Correlates of Consciousness is framed in a way which makes it unpromising or completely misconceived. And yet… understanding consciousness, for most people, is really a matter of building a bridge between the physical and the mental; even if we’re not out to reduce the mental to the physical, we want to see, as it were, diplomatic relations established between the two. How could that bridge ever be built without some work on the physical side, and how could that work not be, in part at least, about tracking neuronal activity? If we’re not going to succumb to mystery or magic, we just have to keep looking, don’t we?

I think there are probably two morals to be drawn. The first is that while we have to keep looking for neural correlates of consciousness in some sense (even if we don’t describe the project that way), it was probably always a little naive to look for the correlates, the single simple things that would infallibly diagnose the presence of consciousness. It was always a bit unlikely, at any rate, that something as simple as oscillating together at 40 Hertz just was consciousness; surely it was always going to be a lot more complicated than that?

Second, we probably do need a bit more of a theory, or at least a hypothesis. There’s no need to be unduly narrow-minded about our scientific method; sometimes even random exploration can lead to significant insights just as well as carefully constructed testing of well-defined hypotheses. But the neuronal activity of the brain is often, and quite rightly, described as the most complex phenomenon in the known universe. Without any theoretical insight into how we think neuronal activity might be giving rise to consciousness, we really don’t have much chance of seeing what we’re after unless it just happens by great good fortune to be blindingly obvious. Just having a bit of a look to see if we can spot things that reliably occur when consciousness is present is probably underestimating the task. Indeed, that is sort of the theme of the collection; Beyond the Simple Contrastive Approach. To put it crudely, if you’re looking for something, it helps to have an idea of what the thing you’re looking for looks like.
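For what it's worth, the "simple contrastive approach" the collection wants to move beyond can be caricatured in a few lines: take a candidate signal, compare its average across matched conscious and unconscious trials, and treat any reliable difference as a correlate. The numbers below are invented purely for illustration; nothing here is drawn from the actual papers.

```python
import statistics

# A toy version of the simple contrastive approach: compare the mean
# activity of one candidate region across seen vs unseen trials.
# All values are invented for illustration only.
def contrast(seen, unseen):
    return statistics.mean(seen) - statistics.mean(unseen)

seen_trials = [2.1, 2.4, 2.0, 2.3]    # stimulus consciously perceived
unseen_trials = [1.0, 1.2, 0.9, 1.1]  # stimulus presented but not perceived
diff = contrast(seen_trials, unseen_trials)
```

The obvious weakness, and the point of the paragraph above, is that a difference like this tells us nothing about *why* the region matters, or whether it reflects consciousness rather than attention, report, or mere stimulus processing.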

In another 25 or 30 years, will we still be looking? Or will we have given up in despair? Nil Desperandum!