Posts tagged ‘consciousness’

It had to happen eventually. I decided it was time I nailed my colours to the mast and said what I actually think about consciousness in book form: and here it is (amazon.com, amazon.co.uk). The Shadow of Consciousness (A Little Less Wrong) has two unusual merits for a book about consciousness: it does not pretend to give the absolute final answer about everything; and more remarkable than that, it features no pictures at all of glowing brains.

Actually it falls into three parts (only metaphorically – this is a sturdy paperback product or a sound Kindle ebook, depending on your choice). The first is a quick and idiosyncratic review of the history of the subject. I begin with consciousness seen as the property of things that move without being pushed (an elegant definition and by no means the worst) and, well, after that it gets a bit more complicated.

The underlying theme here is how the question itself has changed over time, and crucially become less a matter of intellectual justifications and more a matter of practical blueprints for robots. The robots are generally misconceived, and may never really work – but the change of perspective has opened up the issues in ways that may be really helpful.

The second part describes and solves the Easy Problem. No, come on. What it really does is look at the unforeseen obstacles that have blocked the path to AI and to a proper understanding of consciousness. I suggest that a series of different, difficult problems are all in the end members of a group, all of which arise out of the inexhaustibility of real-world situations. The hard core of this group is the classical non-computability established for certain problems by Turing, but the Frame Problem, Quine’s indeterminacy of translation, the problem of relevance, and even Hume’s issues with induction, all turn out to be about the inexhaustible complexity of the real world.

I suggest that the brain uses the pre-formal, anomic (rule-free) faculty of recognition to deal with these problems, and that that in turn is founded on two special tools: a pointing ability which we can relate to H.P. Grice’s concept of natural meaning, and a doubly ambiguous approach to pattern matching which is highlighted by Edelman’s analogy with the immune system.

The third part of the book tackles the Hard Problem. It flails around for quite a while, failing to make much sense of qualia, and finally suggests that in fact there is only one quale; that is, that the special vividness and particularity of real experience which is attributed to qualia is in fact simply down to the haecceity – the ‘thisness’ of real experience. In the classic qualia arguments, I suggest, we miss this partly because we fail to draw the correct distinction between existence and subsistence (honestly the point is not as esoteric as it sounds).

Along the way I draw some conclusions about causality and induction and how our clerkish academic outlook may have led us astray now and then.

Not many theories have rated more than a couple of posts on Conscious Entities, but I must say I’ve rather impressed myself with my own perspicacity, so I’m going to post separately about four of the key ideas in the book, alternating with posts about other stuff. The four ideas are inexhaustibility, pointing, haecceity and reality. Then I promise we can go back to normal.

I’ll close by quoting from the acknowledgements…

… it would be poor-spirited of me indeed not to tip my hat to the regulars at Conscious Entities, my blog, who encouraged and puzzled me in very helpful ways.

Thanks, chaps. Not one of you, I think, will really agree with what I’m saying, and that’s exactly as it should be.

An interesting piece in Aeon by David Deutsch. There was a shorter version in the Guardian, but it just goes to show how even reasonably intelligent editing can mess up a piece. There were several bits in the Guardian version where I was thinking to myself: ooh, he’s missed the point a bit there, he doesn’t really get that: but on reading the full version I found those very points were ones he actually understood very well. In fact he talks a lot of sense and has some real insights.

Not that everything is perfect. Deutsch quite reasonably says that AGI, artificial general intelligence, machines that think like people, must surely be possible. We could establish that by merely pointing out that if the brain does it, then it seems natural that a machine must be able to do it: but Deutsch invokes the universality of computation, something he says he proved in the 1980s. I can’t claim to understand all this in great detail, but I think what he proved was the universality in principle of quantum computation, though the notion of computation used was avowedly broader than Turing computation. So it’s odd that he goes on to credit Babbage with discovering the idea, as a conjecture, and Turing with fully understanding it. He says of Turing:

He concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written.

That seems too sweeping to me: it’s not unlikely that Turing did believe those things, but they go far beyond his rather cautious published claims, something we were sort of talking about last time.

I’m not sure I fully grasp what people mean when they talk about the universality of computation. It seems that what they mean is that any given physical state of affairs can be adequately reproduced, or at any rate emulated to any required degree of fidelity, by computational processes. This is probably true: what it perhaps overlooks is that for many commonplace entities there is no satisfactory physical description. I’m not talking about esoteric items here: think of a vehicle, or to be Wittgensteinian, a game. Being able to specify things in fine detail, down to the last atom, is simply no use in either case. There’s no set of descriptions of atom placement that defines all possible vehicles (virtually anything can be a vehicle) and certainly none for all possible games, which, given the fogginess of the idea, could easily correspond with any physical state of affairs. These items are defined on a different level of description, in particular one where purposes and meanings exist and are relevant. So unless I’ve misunderstood, the claimed universality is not as universal as we might have thought.

However, Deutsch goes on to suggest, and quite rightly, I think, that what programmed AIs currently lack is a capacity for creative thought. Endowing them with this, he thinks, will require a philosophical breakthrough. At the moment, on his account, we still tend to believe that new insights come from induction; whereas ever since Hume there has been a problem over induction, and no-one knows how to write an algorithm which can produce genuine and reliable new inductions.

Deutsch unexpectedly believes that Popperian epistemology has the solution, but that it has been overlooked. Popper, of course, took the view that scientific method was not about proving a theory but about failing to disprove one: so long as your hypotheses withstood all attempts to prove them false (and so long as they were not cast in cheating ways that made them unfalsifiable) you were entitled to hang on to them.

Maybe this helps to defer the reckoning so far as induction is concerned: it sort of kicks the can down the road indefinitely. The problem, I think, is that the Popperian still has to be able to identify which hypotheses to adopt in the first place; there’s a very large if not infinite choice of possible ones for any given set of circumstances.

I think the answer is recognition: I think recognition is the basic faculty underlying nearly all of human thought. We just recognise that certain inductions, and certain events that might be cases of cause and effect, are sound examples: and our creative thought is very largely powered by recognising aspects of the world we hadn’t spotted before.

The snag is, in my view, that recognition is unformalisable and anomic – lacking in rules. I have a kind of proof of this. In order to apply rules, we have to be able to identify the entities to which the rules should be applied. This identification is a matter of recognising the entities. But recognition cannot itself be based on rules, because that would then require us to identify the entities to which those rules applied – and we’d be caught in a vicious circle.

It seems to follow that if no rules can be given for recognition, no algorithm can be constructed either, and so one of the basic elements of thought is just not susceptible to computation. Whether quantum computation is better at this sort of thing than Turing computation is a question I’m not competent to judge, but I’d be surprised if the idea of rule-free algorithms could be shown to make sense for any conception of computation.

So that might be why AGI has not come along very quickly. Deutsch may be right that we need a philosophical breakthrough, although one has to have doubts about whether the philosophers look likely to supply it: perhaps it might be one of those things where the practicalities come first and then the high theory is gradually constructed after the fact. At any rate Deutsch’s piece is a very interesting one, and I think many of his points are good. Perhaps if there were a book-length version I’d find that I actually agree with him completely…

The Hard Problem may indeed be hard, but it ain’t new:

Twenty years ago, however, an instant myth was born: a myth about a dramatic resurgence of interest in the topic of consciousness in philosophy, in the mid-1990s, after long neglect.

So says Galen Strawson in the TLS: philosophers have been talking about consciousness for centuries. Most of what he says, including his main specific point, is true, and the potted history of the subject he includes is good, picking up many interesting and sensible older views that are normally overlooked (most of them overlooked by me, to be honest). If you took all the papers he mentioned and published them together, I think you’d probably have a pretty good book about consciousness. But he fails to consider two very significant factors and rather over-emphasises the continuity of discussion in philosophy and psychology, leaving a misleading impression.

First, yes, it’s absolutely a myth that consciousness came back to the fore in philosophy only in the mid-1990s, and that Francis Crick’s book The Astonishing Hypothesis was important in bringing that about. The allegedly astonishing hypothesis, identifying mind and brain, had indeed been a staple of philosophical discussion for centuries.  We can also agree that consciousness really did go out of fashion at one stage: Strawson grants that the behaviourists excluded consciousness from consideration, and that as a result there really was an era when it went through a kind of eclipse.

He rather underplays that, though, in two ways. First, he describes it as merely a methodological issue. It’s true that the original behaviourists stopped just short of denying the reality of consciousness, but they didn’t merely say ‘let’s approach consciousness via a study of measurable behaviour’; they excluded all reference to consciousness from psychology, an exclusion that was meant to be permanent. Second, the leading behaviourists were just the banner bearers for a much wider climate of opinion that clearly regarded consciousness as bunk, not just a non-ideal methodological approach. Interestingly, it looks to me as if Alan Turing was pretty much of this mind. Strawson says:

But when Turing suggests a test for when it would be permissible to describe machines as thinking, he explicitly puts aside the question of consciousness.

Actually Turing barely mentions consciousness; what he says is…

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion.

The question of consciousness must be at least equally meaningless in his eyes. Turing here sounds very like a behaviourist to me.

What he does represent is the appearance of an entirely new element in the discussion. Strawson represents the history as a kind of debate within psychology and philosophy; it may have been like that at one stage – a relatively civilised dialogue between the elder subject and its offspring. They’d had a bit of a bust-up when psychology ran away from home to become a science, but they were broadly friends now, recognising each other’s prerogatives, and with a lot of common heritage. But in 1950, with Turing’s paper, a new loutish figure elbowed its way to the table: no roots in the classics, no long academic heritage, not even really a science: Artificial Intelligence. The new arrival seized the older disciplines by the throat and shook them until their teeth rattled, threatening to take the whole topic away from them wholesale. This seminal, transformational development doesn’t feature in Strawson’s account at all. His version makes it seem as if the bitchy tea-party of philosophy continued undisturbed, while in fact after the rough intervention of AI, psychology’s muscular cousin neurology pitched in and something like a saloon bar brawl ensued, with lots of disciplines throwing in the odd punch and even the novelists and playwrights hitching up their skirts from time to time and breaking a bottle over somebody’s head.

The other large factor he doesn’t discuss is the religious doctrine of the soul. For most of the centuries of discussion he rightly identifies, one’s permitted views about the mind and identity were set out in clear terms by authorities who in the last resort would burn you alive. That has an effect. Descartes is often criticised for being a dualist; we have no particular reason to think he wasn’t sincere, but we ought to recognise that being anything else could have got him arrested. Strawson notes that Hobbes got away with being a materialist and Hume with saying things that strongly suggested atheism; but they were exceptions, both in the more tolerant (or at any rate more disorderly) religious environment of Britain.

So although Strawson’s specific point is right, there really was a substantial sea change: earlier and more complex, but no less worthy of attention.

In those long centuries of philosophy, consciousness may have got the occasional mention, but the discussion was essentially about thought, or the mind. When Locke mentioned the inverted spectrum argument, he treated it only as a secondary issue, and the essence of his point was that the puzzle which was to become the Hard Problem was nugatory, of no interest or importance in itself.

Consciousness per se took centre stage only when religious influence waned and science moved in. For the structuralists like Wundt it was central, but the collapse of the structuralist project led directly to the long night of behaviourism we have already mentioned. Consciousness came back into the centre gradually during the second half of the twentieth century, but this time instead of being the main object of attention it was pressed into service as the last defence against AI; the final thing that computers couldn’t do. Whereas Wundt had stressed the scientific measurement of consciousness, its unmeasurability was now the very thing that made it interesting. This meant a rather different way of looking at it, and the gradual emergence of qualia for the first time as the real issue. Strawson is quite right of course that this didn’t happen in the mid-nineties; rather, David Chalmers’ formulation cemented and clarified a new outlook which had already been growing in influence for several decades.

So although the Hard Problem isn’t new, it did become radically more important and central during the latter part of the last century; and as yet the sheriff still ain’t showed up.

This editorial piece notes that we still haven’t nailed down the neural correlates of consciousness (NCCs). It’s part of a Research Topic collection on the subject, and it mentions three candidates featured in the papers which have been well-favoured but now – arguably at any rate – seem to have been found wanting. This old but still useful paper by David Chalmers lists several more of the old contenders. Though naturally a little downbeat, the editorial piece addresses some of the problems and recommends a fresh assault. However, if we haven’t succeeded after twenty-five or thirty years of trying, perhaps common sense suggests that there might be something fundamentally wrong with the project?

There must be neural correlates of consciousness, though, mustn’t there? Unless we’re dualists, and perhaps even if we are, it seems hard to imagine that mental events are not matched by events in the brain. We have by now a wealth of evidence that stimulating parts of the brain can generate conscious experiences artificially, and we’ve always known that damage to the brain damages the mind; sometimes in exquisitely particular ways. So what could be wrong with the basic premise that there are neural correlates of consciousness?

First, consciousness could itself be a mixed bag of different things, not one consistent phenomenon. Conscious states, after all, include such things as being visually aware of a red light; rehearsing a speech mentally; meditating; and waiting for the starting pistol. These things are different in themselves and it’s not particularly likely that their neuronal counterparts will resemble each other.

Second, it could be realised in multiple ways. Even if we confine ourselves to one kind of consciousness, there’s no guarantee that the brain always does it the same way. If we assume for the sake of argument that consciousness arises from a neuronal function, then perhaps several different processes will do, just as a bucket, a hose, a fountain and a sewer all serve the function of moving water.

Third, it could well be that consciousness arises, not from any property of the neurons doing the thinking, but from the context they do it in. If the higher order theorists were right, to take one example, for a set of neurons to be conscious would require that another set of neurons was directed at them – so that there was a thought about the thought. But whether another set of neurons is executing a function about our first set of neurons is not an observable property of the first set of neurons. As another example it might be that theories of embodiment are true in a strong sense, implying that consciousness depends on context external to the brain altogether.

Fourth, consciousness might depend on finely detailed properties that require very complex decoding. Suppose we have a library and we want to find out which books in it mention libraries; we have to read them to find out. In a somewhat similar way we might have to read the neurons in our brain in detail to find out whether they were supporting consciousness.

Quite apart from these problems of principle, of course, we might reasonably have some reservations about the technology. Even the best scanners have their limitations, typically showing us proxies for the general level of activity in a broad area rather than pinpointing the activity of particular neurons; and it isn’t feasible or ethical to fill a subject’s brain with electrodes. With the equipment we had twenty-five years ago, it was staggeringly ambitious to think we could crack the problem, but even now we might not really be ready.

All that suggests that the whole idea of Neural Correlates of Consciousness is framed in a way which makes it unpromising or completely misconceived. And yet… understanding consciousness, for most people, is really a matter of building a bridge between the physical and the mental; even if we’re not out to reduce the mental to the physical, we want to see, as it were, diplomatic relations established between the two. How could that bridge ever be built without some work on the physical side, and how could that work not be, in part at least, about tracking neuronal activity? If we’re not going to succumb to mystery or magic, we just have to keep looking, don’t we?

I think there are probably two morals to be drawn. The first is that while we have to keep looking for neural correlates of consciousness in some sense (even if we don’t describe the project that way), it was probably always a little naive to look for the correlates, the single simple things that would infallibly diagnose the presence of consciousness. It was always a bit unlikely, at any rate, that something as simple as oscillating together at 40 Hertz just was consciousness; surely it was always going to be a lot more complicated than that?

Second, we probably do need a bit more of a theory, or at least a hypothesis. There’s no need to be unduly narrow-minded about our scientific method; sometimes even random exploration can lead to significant insights just as well as carefully constructed testing of well-defined hypotheses. But the neuronal activity of the brain is often, and quite rightly, described as the most complex phenomenon in the known universe. Without any theoretical insight into how we think neuronal activity might be giving rise to consciousness, we really don’t have much chance of seeing what we’re after unless it just happens by great good fortune to be blindingly obvious. Just having a bit of a look to see if we can spot things that reliably occur when consciousness is present is probably underestimating the task. Indeed, that is sort of the theme of the collection: ‘Beyond the Simple Contrastive Approach’. To put it crudely, if you’re looking for something, it helps to have an idea of what the thing you’re looking for looks like.

In another 25 or 30 years, will we still be looking? Or will we have given up in despair? Nil Desperandum!

Can you change your mind after the deed is done? Ezequiel Di Paolo thinks you can, sometimes. More specifically, he believes that acts can become intentional after they have already been performed. His theory, which seems to imply a kind of time travel, is set out in a paper in the latest JCS.

I think the normal view would be that for an act to be intentional, it must have been caused by a conscious decision on your part. Since causes come before effects, the conscious decision must have happened beforehand, and any thoughts you may have afterwards are irrelevant. There is a blurry borderline over what is conscious, of course; if you were confused or inattentive, if you were ‘on autopilot’ or you were following a hunch or a whim it may not be completely clear how consciously your action was considered.

There can, moreover, be what Di Paolo calls an epistemic change. In such a case the action was always intentional in fact, but you only realise that it was when you think about your own motives more carefully after the event. Perhaps you act in the heat of the moment without reflection; but when you think about it you realise that in fact what you did was in line with your plans and actually caused by them. Although this kind of thing raises a few issues, it is not deeply problematic in the same way as a real change. Di Paolo calls the real change an ontological one; here you definitely did not intend the action beforehand, but it becomes intentional retrospectively.

That seems disastrous on the face of it. If the intentionality of an act can change once, it can presumably change again, so it seems all intentions must become provisional and unreliable; the whole concept of responsibility looks in danger of being undermined. Luckily, Di Paolo believes that changes can only occur in very particular circumstances, and in such a way that only one revision can occur.

His view founds intentions in enactment rather than in linear causation; he has them arising in social interaction. The theory draws on Husserl and Heidegger, but probably the easiest way to get a sense of it is to consider the examples presented by Di Paolo. The first is from De Jaegher and centres, in fittingly continental style, around a cheese board.

De Jaegher is slicing himself a corner of Camembert and notices that his companion is watching in a way which suggests that he too, would like to eat cheese. DJ cuts him a slice and hands it over.
“I could see you wanted some cheese,” he remarks.
“Funny thing, that,” he replies, “actually, I wasn’t wanting cheese until you handed it to me; at that moment the desire crystallised and I now found I had been wanting cheese.”

In a corner of the room, Alice is tired of the party; the people are boring and the magnificent cheese board is being monopolised by philosophers enacting around it. She looks across at her husband and happens to scratch her wrist. He comes over.
“Saw you point at your watch,” he says, “yeah, we probably should go now. We’ve got the Stompers’ do to go to.”
Alice now realises that although she didn’t mean to point to her watch originally, she now feels the earlier intention is in place after all – she did mean to suggest they went.

At the Stompers’ there is dancing; the tango! Alice and Bill are really good, and as they dance Bill finds that his moves are being read and interpreted by Alice superbly; she conforms and shapes to match him before he has actually decided what to do; yet she has read him correctly and he realises that after the fact his intentions really were the ones she divined. (I sort of melded the examples.)

You see how it works? No, it doesn’t really convince me either. It is a viable way of looking at things, but it doesn’t compel us to agree that there was a real change of earlier intention. Around the cheese board there may always have been prior hunger, but I don’t see why we’d say the intention existed before accepting the cheese.

It is true, of course, that human beings are very inclined to confabulate, to make up stories about themselves that make their behaviour make sense, even if that involves some retrospective monkeying with the facts. It might well be that social pressure is a particularly potent source of this kind of thing; we adjust our motivations to fit with what the people around us would like to hear. In a loose sense, perhaps we could even say that our public motives have a social existence apart from the private ones lodged in the recesses of our minds; and perhaps those social ones can be adjusted retrospectively because, to put it bluntly, they are really a species of fiction.

Otherwise I don’t see how we can get more than an epistemic change. I’ve just realised that I really kind of feel like some cheese…

It was exciting to hear that Tom Stoppard’s new play was going to be called The Hard Problem, although until it opened recently details were scarce. In the event the reviews have not been very good. It could easily have been that the pieces in the mainstream newspapers missed the point in some way; unfortunately, Vaughan Bell of Mind Hacks didn’t like the way the intellectual issues were handled either (though he had an entertaining evening); and he’s a very sensible and well-informed commentator on consciousness and the mind. So, a disappointing late entry in a distinguished playwright’s record?

I haven’t seen it yet, but I’ve read the script, which in some ways is better for our current purposes. No-one, of course, supposed that Stoppard was going to present a live solution to the Hard Problem: but in the event the play is barely about that problem at all. The Problem’s chief role is to help Hilary, our heroine, get a job at the Krohl Institute for Brain Science, an organisation set up by the wealthy financier Jerry Krohl. Most of the Krohl’s work is on ‘hard’ neuroscience and reductive, materialist projects, but Leo, the head of the department Hilary joins, happens to think the Hard Problem is central. Merely mentioning it is enough to clinch the job, and that’s about it; the chief concern of the rest of the research we’re told about is altruism, and the Prisoner’s Dilemma.

The strange thing is that within philosophy the Hard Problem must be the most fictionalised issue ever. The wealth of thought experiments, elaborate examples and complicated counterfactuals provides enough stories to furnish the complete folklore of a small country. Mary the colour scientist, the zombies, the bats, Twin Earth, chip-head, the qualia that dance and the qualia that fade like Tolkienish elves; as an author you’d want to make something out of all that, wouldn’t you? Or perhaps that assumption just helps explain why I’m not a successful playwright. Of course, you’d only use that stuff if you really wanted to write about the Hard Problem, and Stoppard, it seems, doesn’t really. Perhaps he should just have picked a different title; Every Good Girl Deserves to Know What Became of Her Kid?

Hilary, in fact, had a daughter as a teenager who she gave up for adoption, and who she has worried about ever since. She believes in God because she needs someone effective to pray to about it; and presumably she believes in altruism so someone can be altruistic towards her daughter; though if the sceptic’s arguments are sound, self-interest would work, too.

The debate about altruism is one of those too-well-trodden paths in philosophy; more or less anything you say feels as if it has been in a thousand mouths already. I often feel there’s an element of misunderstanding between those who defend the concept of altruism and those who would reduce it to selfish genery. Yes, the way people behave tends to be consistent with their own survival and reproduction; but that hardly exhausts the topic; we want to know how the actual reasons, emotions, and social conventions work. It’s sort of as though I remarked on how extraordinary it is that a forest pumps so much water way above the ground.

“There’s no pump, Peter,” says BitBucket; “that’s kind of a naive view. See, the tree needs the water in its leaves to survive, so it has evolved as a water-having organism. There are no little hamadryads planning it all out and working tiny pumps. No water magic.”

“But there’s like, capillarity, or something, isn’t there? Um, osmosis? Xylem and phloem? Turgid vacuoles?”

“Sure, but those things are completely explained by the evolutionary imperatives. Saying there are vacuoles doesn’t tell us why there are vacuoles or why they are what they really are.”

“I don’t think osmosis is completely explained by evolution. And surely the biological pumping system is, you know, worth discussing in itself?”

“There’s no pump, Peter!”

Stoppard seems to want to say that greedy reductionism throws out the baby with the bath water. Hilary’s critique of the Prisoner’s Dilemma is that it lacks all context, all the human background that actually informs our behaviour; what’s the relationship of the two prisoners? When the plot puts her into an analogous dilemma, she sacrifices her own interests and career, and is forced to suffer the humiliation of being left with nothing to study but philosophy. In parallel the financial world that pays for the Krohl is going through convulsions because it relied on computational models which were also too reductionist; it turns out that the market thinks and feels and reacts in ways that aren’t determined by rational game theory.
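For anyone who hasn’t met the game, here is the bare version Hilary objects to, sketched in a few lines of Python; the payoffs are the standard textbook ones, invented here for illustration rather than taken from the play.

# The bare-bones Prisoner's Dilemma, stripped of all context. Payoffs are years in
# prison (lower is better); the numbers are the usual textbook ones, invented for
# illustration, not anything from the play.

PAYOFFS = {
    # (my move, other's move): (my years, other's years)
    ("stay silent", "stay silent"): (1, 1),
    ("stay silent", "confess"):     (10, 0),
    ("confess",     "stay silent"): (0, 10),
    ("confess",     "confess"):     (5, 5),
}

def best_reply(others_move: str) -> str:
    """Whatever the other prisoner does, pick the move that minimises my own sentence."""
    return min(["stay silent", "confess"],
               key=lambda my_move: PAYOFFS[(my_move, others_move)][0])

print(best_reply("stay silent"), best_reply("confess"))  # confess confess

Confessing is the better reply whatever the other prisoner does, so two ‘rational’ prisoners both confess and both end up worse off than if they had stayed silent; Hilary’s complaint, roughly, is that no real pair of people ever faces a choice as bare of context as this.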

That point is possibly a little undercut by the fact that a reductionist actually foresaw the crash. Amal, who lost out by not rating the Hard Problem high enough, nevertheless manages to fathom the market problem ahead of time…

The market is acting stupid, and the models are out of whack because we don’t know how to build a stupid computer.

But perhaps we are to suppose that he’s learnt his lesson and is ready to talk turgid vacuoles with us sloppy thinkers.

I certainly plan to go and see a performance, and if new light dawns as a result, I’ll let you know.


Why can’t we solve the problem of consciousness? That is the question asked by a recent Guardian piece. The account given there is not bad at all; excellent by journalistic standards, although I think it probably overstates the significance of Francis Crick’s intervention. His book was well worth reading, but in spite of the title his hypothesis had ceased to be astonishing quite a while before. Surely also a little odd to have Colin McGinn named only as Ted Honderich’s adversary when his own Mysterian views are so much more widely cited. Still the piece makes a good point; lots of Davids and not a few Samsons have gone up against this particular Goliath, yet the giant is still on his feet.

Well, if several decades of great minds can’t do the job, why not throw a few dozen more at it? The Edge, in its annual question this year, asks its strike force of intellectuals to tackle the question: What do you think about machines that think? This evoked no fewer than 186 responses. Some of the respondents are old hands at the consciousness game, notably Dan Dennett; we must also tip our hat to our friend Arnold Trehub, who briefly denounces the idea that artefactual machines can think. It’s certainly true, in my own opinion, that we are nowhere near thinking machines, and in fact it’s not clear that we are getting materially closer: what we have got is splendid machines that clearly don’t think at all but are increasingly good at doing tasks we previously believed needed thought. You could argue that eliminating the need for thought was Babbage’s project right from the beginning, and we know that Turing discarded the question ‘Can machines think?’ as not worthy of an answer.

186 answers is, of course, at least 185 more than we really wanted, and those are not good odds of getting even a congenial analysis. In fact, the rapid succession of views, some well-informed, others perhaps shooting from the hip to a degree, is rather exhausting: the effect is like a dreadfully prolonged session of speed dating: like my theory? No? Well, don’t worry, there are 180 more on the way immediately. It is sort of fun to surf the wave of punditry, but I’d be surprised to hear that many people were still with the programme when it got to view number 186 (which, despairingly or perhaps refreshingly, is a picture).

Honestly, though, why can’t we solve the problem of consciousness? Could it be that there is something fundamentally wrong? Colin McGinn, of course, argues that we can never understand consciousness because of cognitive closure; there’s no real mystery about it, but our mental toolset just doesn’t allow us to get to the answer. McGinn makes a good case, but I think that human cognition is not formal enough to be affected by a closure of this kind; and if it were, I think we should most likely remain blissfully unaware of it: if we were unable to understand consciousness, we shouldn’t see any problem with it either.

Perhaps, though, the whole idea of consciousness as conceived in contemporary Western thought is just wrong? It does seem to be the case that non-European schools of philosophy construe the world in ways that mean a problem of consciousness never really arises. For that matter, the ancient Greeks and Romans did not really see the problem the way we do: although ancient philosophers discussed the soul and personal identity, they didn’t really worry about consciousness. Commonly people blame Western dualism for drawing too sharp a division between the world of the mind and the world of material objects: and the finger is usually pointed at Descartes in particular. Perhaps if we stopped thinking about a physical world and a non-physical mind the alleged problem would simply evaporate. If we thought of a world constituted by pure experience, not differentiated into two worlds, everything would seem perfectly natural?

Perhaps, but it’s not a trick I can pull off myself. I’m sure it’s true our thinking on this has changed over the years, and that the advent of computers, for example, meant that consciousness, and phenomenal consciousness in particular, became more salient than before. Consciousness provided the extra thing computers hadn’t got, answering our intuitive needs and itself being somewhat reshaped to fill the role.  William James, as we know, thought the idea was already on the way out in 1904: “A mere echo, the faint rumour left behind by the disappearing ‘soul’ upon the air of philosophy”; but over a hundred years later it still stands as one of the great enigmas.

Still, maybe if we send in another 200 intellectuals…?

Susan Schneider’s recent paper argues that when we hear from alien civilisations, it’s almost bound to be super intelligent robots getting in touch, rather than little green men. She builds on Nick Bostrom’s much-discussed argument that we’re all living in a simulation.

Actually, Bostrom’s argument is more cautious than that, and more carefully framed. His claim is that at least one of the following propositions is true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation.

So that if we disbelieve the first two, we must accept the third.

In fact there are plenty of reasons to argue that the first two propositions are true. The first evokes ideas of nuclear catastrophe or an unexpected comet wiping us out in our prime, but equally it could just be that no post human stage is ever reached. We only know about the cultures of our own planet, but two of the longest lived – the Egyptian and the Chinese – were very stable, showing few signs of moving on towards post humanism. They made the odd technological advance, but they also let things slip: no more pyramids after the Old Kingdom; ocean-going junks abandoned before being fully exploited. Really only our current Western culture, stemming from the European Renaissance, has displayed a long run of consistent innovation; it may well be a weird anomaly and its five-hundred year momentum may well be temporary. Maybe our descendants will never go much further than we already have; maybe, thinking of Schneider’s case, the stars are basically inhabited by Ancient Egyptians who have been living comfortably for millions of years without ever discovering electricity.

The second proposition requires some very debatable assumptions, notably that consciousness is computable. But the notion of “simulation” also needs examination. Bostrom takes it that a computer simulation of consciousness is likely to be conscious, but I don’t think we’d assume a digital simulation of digestion would do actual digesting. The thing about a simulation is that by definition it leaves out certain aspects of the real phenomenon (otherwise it’s the phenomenon itself, not a simulation). Computer simulations normally leave out material reality, which could be a problem if we want real consciousness. Maybe it doesn’t matter for consciousness; Schneider argues strongly against any kind of biological requirement and it may well be that functional relations will do in the case of consciousness. There’s another issue, though; consciousness may be uniquely immune from simulation because of its strange epistemological greediness. What do I mean? Well, for a simulation of digestion we can write a list of all the entities to be dealt with – the foods we expect to enter the gut and their main components. It’s not an unmanageable task, and if we like we can leave out some items or some classes of item without thereby invalidating the simulation. Can we write a list of the possible contents of consciousness? No. I can think about any damn thing I like, including fictional and logically impossible entities. Can we work with a reduced set of mental contents? No; this ability to think about anything is of the essence.

All this gets much worse when Bostrom floats the idea that future ancestor simulations might themselves go on to be post human and run their own nested simulations, and so on. We must remember that he is really talking about simulated worlds, because his simulated ancestors need to have all the right inputs fed to them consistently. A simulated world has to be significantly smaller in information terms than the world that contains it; there isn’t going to be room within it to simulate the same world again at the same level of detail. Something has to give.
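To put toy numbers on that intuition (the figure for the fraction below is entirely invented; only the geometric shrinkage matters):

# A toy illustration of the information-budget point. Suppose a world with capacity C
# can devote at most a fraction f < 1 of that capacity to any world it simulates; then
# n levels down only f**n of the original capacity remains, so nesting 'at the same
# level of detail' runs out of room almost immediately.

C = 1.0   # capacity of the real world, in arbitrary units
f = 0.1   # assumed fraction of a world's capacity available to the world it simulates

for level in range(5):
    print(f"level {level}: capacity {C * f ** level:g}")
# level 0: 1, level 1: 0.1, level 2: 0.01 ... each nested world must be drastically coarser.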

Without the indefinite nesting, though, there’s no good reason to suppose the simulated ancestors will ever outnumber the real people who ever lived in the real world. I suppose Bostrom thinks of his simulated people as taking up negligible space and running at speeds far beyond real life; but when you’re simulating everything, that starts to be questionable. The human brain may be the smallest and most economic way of doing what the human brain does.

Schneider argues that, given the same Whiggish optimism about human progress we mentioned earlier, we must assume that in due course fleshy humans will be superseded by faster and more capable silicon beings, either because robots have taken over the reins or because humans have gradually cyborgised themselves to the point where they are essentially super intelligent robots. Since these post human beings will live on for billions of years, it’s almost certain that when we make contact with aliens, that will be the kind we meet.

She is, curiously, uncertain about whether these beings will be conscious. She really means that they might be zombies, without phenomenal consciousness. I don’t really see how super intelligent beings like that could be without what Ned Block called access consciousness, the kind that allows us to solve problems, make plans, and generally think about stuff; I think Schneider would agree, although she tends to speak as though phenomenal, experiential consciousness was the only kind.

She concludes, reasonably enough, that the alien robots most likely will have full conscious experience. Moreover, because reverse engineering biological brains is probably the quick way to consciousness, she thinks that a particular kind of super intelligent AI is likely to predominate: biologically inspired superintelligent alien (BISA). She argues that although BISAs might in the end be incomprehensible, we can draw some tentative conclusions about BISA minds:
(i) Learning about the computational structure of the brain of the species that created the BISA can provide insight into the BISA’s thinking patterns.
(ii) BISAs may have viewpoint invariant representations. (Surely they wouldn’t be very bright if they didn’t?)
(iii) BISAs will have language-like mental representations that are recursive and combinatorial. (Ditto.)
(iv) BISAs may have one or more global workspaces. (If you believe in global workspace theory, certainly. Why more than one, though – doesn’t that defeat the object? Global workspaces are useful because they’re global.)
(v) A BISA’s mental processing can be understood via functional decomposition.

I’ll throw in a strange one; I doubt whether BISAs would have identity, at least not the way we do. They would be computational processes in silicon: they could split, duplicate, and merge without difficulty. They could be copied exactly, so that the question of whether BISA x was the same as BISA y could become meaningless. For them, in fact, communicating and merging would differ only in degree. Something to bear in mind for that first contact, perhaps.

This is interesting stuff, but to me it’s slightly surprising to see it going on in philosophy departments; does this represent an unexpected revival of the belief that armchair reasoning can tell us important truths about the world?

Microsoft recently announced the first public beta preview for Skype Translate, a service which will provide immediate translation during voice calls. For the time being only Spanish/English is working, but we’re told that English/German and other languages are on the way. The approach used is complex. Deep Neural Networks apparently play a key role in the speech recognition. While the actual translation ultimately relies on recognising bits of text which resemble those it already knows – the same basic principle applied in existing text translators such as Google Translate – it is also capable of recognising and removing ‘disfluencies’ (ums and ers, rephrasings, and so on), and apparently makes some use of syntactical models, so there is some highly sophisticated processing going on. It seems to do a reasonable job, though as always with this kind of thing a degree of scepticism is appropriate.
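The internals of Skype Translate are not public, so purely to illustrate the general shape of such a pipeline – strip the disfluencies, then translate by matching known chunks of text – here is a minimal sketch in Python; the filler list, the phrase table and the example sentence are all invented for the purpose.

import re

# A minimal, purely illustrative sketch of the kind of pipeline described above:
# strip simple disfluencies, then translate by looking up known phrases.
# Real systems use statistical models trained on huge corpora, not hand-written tables.

FILLERS = {"um", "uh", "er", "erm", "like"}

PHRASE_TABLE = {          # hypothetical English -> Spanish phrase pairs
    "good morning": "buenos días",
    "how are you": "cómo estás",
    "thank you": "gracias",
}

def remove_disfluencies(text: str) -> str:
    """Drop filler words and immediate word repetitions ('I I think' -> 'I think')."""
    cleaned = []
    for w in text.lower().split():
        w = re.sub(r"\W", "", w)
        if w in FILLERS or (cleaned and w == cleaned[-1]):
            continue
        cleaned.append(w)
    return " ".join(cleaned)

def translate(text: str) -> str:
    """Greedy longest-match lookup against the phrase table; unknown words pass through."""
    words, out, i = remove_disfluencies(text).split(), [], 0
    while i < len(words):
        for length in range(len(words) - i, 0, -1):       # try the longest phrase first
            chunk = " ".join(words[i:i + length])
            if chunk in PHRASE_TABLE:
                out.append(PHRASE_TABLE[chunk])
                i += length
                break
        else:
            out.append(words[i])                           # no match: leave untranslated
            i += 1
    return " ".join(out)

print(translate("Um, good morning, how how are you?"))     # -> buenos días cómo estás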

Translating actual speech, with all its messy variability, is of course an amazing achievement, much more difficult than dealing with text (which itself is no walk in the park); and it’s remarkable indeed that it can be done so well without the machine making any serious attempt to deal with the meaning of the words it translates. Perhaps that’s a bit too bald: the software does take account of context and as I said it removes some meaningless bits, so arguably it is not ignoring meaning totally. But full-blown intentionality is completely absent.

This fits into a recent pattern in which barriers to AI are falling to approaches which skirt or avoid consciousness as we normally understand it, and all the intractable problems that go with it. It’s not exactly the triumph of brute force, but it does owe more to processing power and less to ingenuity than we might have expected. At some point, if this continues, we’re going to have to take seriously the possibility of our having, in the not-all-that remote future, a machine which mimics human behaviour brilliantly without our ever having solved any of the philosophical problems. Such a robot might run on something like a revival of the frames or scripts of Marvin Minsky or Roger Schank, only this time with a depth and power behind it that would make the early attempts look like working with an abacus. The AI would, at its crudest, simply be recognising situations and looking up a good response, but it would have such a gigantic library of situations and it would be so subtle at customising the details that its behaviour would be indistinguishable from that of ordinary humans for all practical purposes. What would we say about such a robot (let’s call her Sophia – why not, since anthropomorphism seems inevitable)? I can see several options.
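To make the ‘gigantic library of situations’ idea a little more concrete, here is a deliberately crude sketch; the situations, cues and responses are invented, and a real Sophia would need an astronomically larger library and far subtler customisation, but the basic move – recognise the situation, look up a response – is the same.

# A deliberately crude sketch of the 'library of situations' idea described above.
# Everything here (the situations, the scoring, the responses) is invented for
# illustration; the point is only the shape of the mechanism: recognise, then look up.

SITUATION_LIBRARY = [
    # (cue words that identify the situation, canned response)
    ({"hello", "hi", "morning"},        "Hello! Nice to see you."),
    ({"weather", "rain", "sunny"},      "Yes, the weather has been strange lately."),
    ({"sorry", "apologise", "apology"}, "Please don't worry about it at all."),
]

def respond(utterance: str) -> str:
    """Pick the canned response whose cue set best overlaps the input words."""
    words = set(utterance.lower().replace("?", "").replace("!", "").split())
    best_score, best_response = 0, "I'm not sure what to say to that."
    for cues, response in SITUATION_LIBRARY:
        score = len(cues & words)          # count how many cue words were recognised
        if score > best_score:
            best_score, best_response = score, response
    return best_response

print(respond("Hi there, lovely morning!"))   # matches the greeting situation
print(respond("Do you think it will rain?"))  # matches the weather situation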

Option one. Sophia really is conscious, just like us. OK, we don’t really understand how we pulled it off, but it’s futile to argue about it when her performance provides everything we could possibly demand of consciousness and passes every test anyone can devise. We don’t argue that photographs are not depictions because they’re not executed in oil paint, so why would we argue that a consciousness created by other means is not the real thing? She achieved consciousness by a different route, and her brain doesn’t work like ours – but her mind does. In fact, it turns out we probably work more like her than we thought: all this talk of real intrinsic intentionality and magic meaningfulness turns out to be a systematic delusion; we’re really just running scripts ourselves!

Option two. Sophia is conscious, but not in the way we are. OK, the results are indistinguishable, but we just know that the methods are different, and so the process is not the same. Birds and bats both fly, but they don’t do it the same way. Sophia probably deserves the same moral rights and duties as us, though we need to be careful about that; but she could very well be a philosophical zombie who has no subjective experience. On the other hand, her mental life might have subjective qualities of its own, very different to ours but incommunicable.

Option three. She’s not conscious; we just know she isn’t, because we know how she works and we know that all her responses and behaviour come from simply picking canned sequences out of the cupboard. We’re deluding ourselves if we think otherwise. But she is the vivid image of a human being and an incredibly subtle and complex entity: she may not be that different from animals whose behaviour is largely instinctive. We cannot therefore simply treat her as a machine: she probably ought to have some kinds of rights: perhaps special robot rights. Since we can’t be absolutely certain that she does not experience real pain and other feelings in some form, and since she resembles us so much, it’s right to avoid cruelty both on the grounds of the precautionary principle and so as not to risk debasing our own moral instincts; if we got used to doling out bad treatment to robots who cried out with human voices, we might get used to doing it to flesh and blood people too.

Option four.  Sophia’s just an entertaining machine, not conscious at all; but that moral stuff is rubbish. It’s perfectly OK to treat her like a slave, to turn her off when we want, or put her through terrible ‘ordeals’ if it helps or amuses us. We know that inside her head the lights are off, no-one home: we might as well worry about dolls. You talk about debasing our moral instincts, but I don’t think treating puppets like people is a great way to go, morally. You surely wouldn’t switch trolleys to save even ten Sophias if it killed one human being: follow that out to its logical conclusion.

Option five. Sophia is a ghastly parody of human life and should be destroyed immediately. I’m not saying she’s actuated by demonic possession (although Satan is pretty resourceful), but she tempts us into diabolical errors about the unique nature of the human spirit.

No doubt there are other options; for me, at any rate, being obliged to choose one is a nightmare scenario. Merry Christmas!

The problem of qualia is in itself a very old one, but it is expressed in new terms. My impression is that the actual word ‘qualia’ only began to be widely used (as a hot new concept) in the 1970s. The question of whether the colours you experience in your mind are the same as the ones I experience in mine, on the other hand, goes back a long way. I’m not aware of any ancient discussions, though I should not be at all surprised to hear that there is one in, say, Sextus Empiricus (if you know one please mention it): I think the first serious philosophical exposition of the issue is Locke’s in the Essay Concerning Human Understanding:

“Neither would it carry any imputation of falsehood to our simple ideas, if by the different structure of our organs, it were so ordered, that the same object should produce in several men’s minds different ideas at the same time; e.g. If the idea, that a violet produced in one man’s mind by his eyes, were the same that a marigold produces in another man’s, and vice versa. For since this could never be known: because one man’s mind could not pass into another man’s body, to perceive, what appearances were produced by those organs; neither the ideas hereby, nor the names, would be at all confounded, or any falsehood be in either. For all things, that had the texture of a violet, producing constantly the idea, which he called blue, and those that had the texture of a marigold, producing constantly the idea, which he as constantly called yellow, whatever those appearances were in his mind; he would be able as regularly to distinguish things for his use by those appearances, and understand, and signify those distinctions, marked by the names blue and yellow, as if the appearances, or ideas in his mind, received from those two flowers, were exactly the same, with the ideas in other men’s minds.”

Interestingly, Locke chose colours which are (near enough) opposites on the spectrum; this inverted spectrum form of the case has been highly popular in recent decades.  It’s remarkable that Locke put the problem in this sophisticated form; he managed to leap to a twentieth-century outlook from a standing start, in a way. It’s also surprising that he got in so early: he was, after all, writing less than twenty years after the idea of the spectrum was first put forward by Isaac Newton. It’s not surprising that Locke should know about the spectrum; he was an enthusiastic supporter of Newton’s ideas, and somewhat distressed by his own inability to follow them in the original. Newton, no courter of popularity, deliberately expressed his theories in terms that were hard for the layman, and scientifically speaking, that’s what Locke was. Alas, it seems the gap between science and philosophy was already apparent even before science had properly achieved a separate existence: Newton would still have called himself a natural philosopher, I think, not a scientist.

It’s hard to be completely sure that Locke did deliberately pick colours that were opposite on the spectrum – he doesn’t say so, or call attention to their opposition (there might even be some room for debate about whether ‘blue’ and ‘yellow’ are really opposite) – but it does seem at least that he felt that strongly contrasting colours provided a good example, and in that respect at least he anticipated many future discussions. The reason so many modern theorists like the idea is that they believe a switch of non-opposite colour qualia would be detectable, because the spectrum would no longer be coherent, while inverting the whole thing preserves all the relationships intact and so leaves the change undetectable. Myself, I think this argument is a mistake, inadvertently transferring to qualia the spectral structure which actually belongs to the objective counterparts of colour qualia. The qualia themselves have to be completely indistinguishable, so it doesn’t matter whether we replace yellow qualia with violet or orange ones, or for that matter, with the quale of the smell of violets.
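To make the structural point concrete, here is a toy version of the theorists’ reasoning, with hues represented as angles on a colour circle (the particular colours and numbers are illustrative only): rotating everything by 180 degrees preserves every pairwise relationship, while swapping just two non-opposite colours does not.

# A toy version of the modern inverted-spectrum argument (which, as I say, I think is
# a mistake). Hues are angles on a colour circle; the colours and angles are invented
# for illustration.

HUES = {"red": 0, "yellow": 60, "green": 120, "cyan": 180, "blue": 240, "violet": 300}

def distance(a: int, b: int) -> int:
    """Angular distance between two hues on the colour circle."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def relations(hues: dict) -> dict:
    """All pairwise distances between the named hues."""
    names = sorted(hues)
    return {(x, y): distance(hues[x], hues[y]) for x in names for y in names if x < y}

inverted = {name: (angle + 180) % 360 for name, angle in HUES.items()}   # invert everything
swapped = dict(HUES, yellow=HUES["green"], green=HUES["yellow"])         # swap just two hues

print(relations(inverted) == relations(HUES))  # True: structure intact, change undetectable
print(relations(swapped) == relations(HUES))   # False: relations to other colours change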

Strangely enough, though, Locke was not really interested in the problem; on the contrary, he set it out only because he was seeking to dismiss it as an irrelevance. His aim, in context, was to argue that simple perceptions cannot be wrong, and the possibility of inconsistent colour judgements – one person seeing blue where another saw yellow – seemed to provide a potential counter-argument which he needed to eliminate. If one person sees red where another sees green, surely at least one of them must be wrong? Locke’s strategy was to admit that different people might have different ideas for the same percept (nowadays we would probably refer to these subjective ideas of percepts as qualia), but to argue that it doesn’t matter because they will always agree about which colour is, in fact, yellow, so it can’t properly be said that their ideas are wrong. Locke, we can say, was implicitly arguing that qualia are not worth worrying about, even for philosophical purposes.

This ‘so what?’ line of thought is still perfectly tenable. We could argue that two people looking at the same rose will not only agree that it is red, but also concur that they are both experiencing red qualia; so the fact that inwardly their experiences might differ is literally of no significance – obviously of no practical significance, but arguably also metaphysically nugatory. I don’t know of anyone who espouses this disengaged kind of scepticism, though; more normally people who think qualia don’t matter go on to argue that they don’t exist, either. Perhaps the importance we attach to the issue is a sign of how our attitudes to consciousness have changed: it was itself a matter of no great importance or interest to Locke.  I believe consciousness acquired new importance with the advent of serious computers, when it became necessary to find some quality  with which we could differentiate ourselves from machines. Subjective experience fit the bill nicely.