Success with Consciousness

What would success look like, when it comes to the question of consciousness?

Of course, it depends which of the many intersecting tribes that dispute or share the territory you belong to. Robot builders and AI designers have known since Turing that their goal is a machine whose responses cannot be distinguished from those of a human being. There’s a lot wrong with the Turing Test, but I still think it’s true that if we had a humanoid robot that could walk and talk and interact like a human being in a wide range of circumstances, most people wouldn’t question whether it was conscious or not. We’d like a theory to go with our robot, but the main thing is whether it works. Even if we knew it worked in ways that were totally unlike biological brains, it wouldn’t matter – planes don’t fly the way birds do, but so what, it’s still flying. Of course we’re a million miles from such a perfectly human robot, but we sort of know where we’re going.

It’s a little harder for neurologists; they can’t rely quite so heavily on a practical demonstration, and reverse engineering consciousness is tough. Still, there are some feats that could be pulled off that would pretty much suggest the neurologists have got it. If we could reliably read off from some scanner the contents of anyone’s mind, and better yet, insert thoughts and images at will, it would be hard to deny that the veil of mystery had been drawn back quite a distance. It would have to be a general-purpose scanner, though; one that worked straight away for all thoughts in any person’s brain. People have already demonstrated that they can record a pattern from one subject’s brain when that subject is thinking a known thought, and then, in the same session with the same subject, recognise that same pattern as a sign of the same thought. That is a much lesser achievement, and I’m not sure it gets you a cigar, let alone the Nobel prize.

What about the poor philosophers? They have no way to mount a practical demonstration, and in fact no such demonstration can save them from their difficulties. The perfectly human robot does not settle things for them; they tell it that while it appears able to perform a range of ‘easy’ cognitive tasks, whether it really knows anything at all about what it’s doing is another matter. They doubt whether it really has subjective experience, even though it assures them that its own introspective evidence says it does. The engineer sitting with them points out that some of the philosophers probably doubt whether he has subjective experience.

“Oh, we do,” they admit, “in fact many of us are pretty sure we don’t have it ourselves. But somehow that doesn’t seem to make it any easier to wrap things up.”

Nor are the philosophers silenced by the neurologists’ scanner, which reveals that an apparently comatose patient is in fact fully aware and thinking of Christmas. The neurologists wake up the subject, who readily confirms that their report is exactly correct. But how do they know, ask the philosophers; you could be recording an analogue of experience which gets tipped into memory only at the point of waking, or your scanner could be conditioning memory directly without any actual experience. The subject could be having zomboid dreams, which convey neural data, but no actual experience.

“No, they really couldn’t,” protest the neurologists, but in vain.

So where do philosophers look for satisfaction? Of course, the best thing of all is to know the correct answer. But you can only believe that you know. If knowledge requires you to know that you know, you’re plummeting into an infinite regress; if knowing requires appropriate justification, then you’re into a worm-can-opening session about justification of which there is no end. Anyway, even the most self-sufficient of us would like others to agree, if not to recognise the brilliance of our solution.

Unfortunately you cannot make people agree with you about philosophy. Physicists can set off a bomb to end the argument about whether E really equals mc squared; the best philosophers can do is derive melancholy satisfaction from the belief that in fifty years someone will probably be quoting their arguments as common sense, though they will not remember who invented them, or that anyone did. Some people will happen to agree with you already, of course, which is nice, but your arguments will convert no-one; not only can you not get people to accept your case, you probably can’t even get them to read your paper. I sympathised recently with a tweet from Keith Frankish lamenting how he has to endlessly revisit bits of argument against his theory of illusionism, ones he’s dealt with many times before (oh, but illusions require consciousness; oh, if it’s an illusion, who’s being deceived…). That must indeed be frustrating, but to be honest it’s probably worse than that; how many people, having had the counter-arguments laid out yet again, accept them or remember them accurately? The task resembles that of Sisyphus, whose punishment in Hades was to roll a boulder up a hill, from which it invariably rolled down again. Camus told us we must imagine Sisyphus happy, but that itself is a mental task which I find undoes itself every time I stop concentrating…

I suppose you could say that if you have to bring out your counter-arguments regularly, that itself is an indicator of having achieved some recognition. Let’s be honest, attention is what everyone wants; moral philosophers all want a mention on The Good Place, and I suppose philosophers of mind would all want to be namechecked on Westworld if Julian Jaynes hadn’t unaccountably got that one sewn up.

Since no-one is going to agree with you, except that sterling band who reached similar conclusions independently, perhaps the best thing is to get your name associated with a colourful thought experiment that lots of people want to refute. Perhaps that’s why the subject of consciousness is so full of them, from the Chinese Room to Mary the Colour Scientist, and so on. Your name gets repeated and cited that way, although there is a slight danger that it ends up being connected forever with a point of view you have since moved on from, as I believe is the case with Frank Jackson himself, who no longer endorses the knowledge argument exemplified by the Mary story.

Honestly, though, being the author of a widely contested idea is second best to being the author of a universally accepted one. There’s a Borges story about a captive priest thrown into a cell where all he can see is a caged jaguar. Gradually he realises that the secrets of the cosmos are encoded in the jaguar’s spots, which he learns to read; eventually he knows the words of magic which would cast down his rival’s palace and restore him to power; but in learning these secrets he has attained enlightenment and no longer cares about earthly matters. I bet every philosopher who reads this story feels a mild regret; yes, of course enlightenment is great, but if only my insights allowed me to throw down a couple of palaces! That bomb thing really kicked serious ass for the physicists; if I could make something go bang, I can’t help feeling people would be a little more attentive to my corpus of work on synthetic neo-dualism…

Actually, the philosophers are not the most hopeless tribe; arguably the novelists are also engaged in a long investigation of consciousness; but those people love the mystery and don’t even pretend to want a solution. I think they really enjoy making things more complicated and even see a kind of liberation in the indefinite exploration; what can you say for people like that!

The Philosophy of Delirium

Is there any philosophy of delirium? I remember asserting breezily in the past that there was philosophy of everything – including the actual philosophy of everything and the philosophy of philosophy. But when asked recently, I couldn’t come up with anything specifically on delirium, which in a way is surprising, given that it is an interesting mental state.

Hume, I gather, described two diseases of philosophy, characterised by either despair or unrealistic optimism in the face of the special difficulties a philosopher faces. The negative over-reaction he characterised as melancholy, the positive as delirium, in its euphoric sense. But that is not what we are after.

Historically I think that if delirium came up in discussion at all, it was bracketed with other delusional states, hallucinations and errors. Those, of course, have an abundant literature going back many centuries. The possibility of error in our perceptions has been responsible for the persistent (but surely erroneous) view that we never perceive reality, only sense-data, or only our idea of reality, or only a cognitive model of reality. The search for certainty in the face of the constant possibility of error motivated Descartes and arguably most of epistemology.

Clinically, delirium is an organically caused state of confusion. Philosophically, I suggest we should seize on another feature, namely that it can involve derangement of both perception and cognition. Let’s invoke the special power of fiat that philosophers use to create new races of zombies, generate second Earths, and enslave the population of China, and say that philosophical delirium is defined as exactly that conjunction of derangements. We can then define three distinct kinds of mental disturbance. First, delusion, where our thinking mind is working fine but has bizarre perceptions presented to it. Second, madness, where our perceptions are fine, but our mental responses make no sense. Third, delirium, in which distorted perceptions meet with distorted cognition.

The question, then, is: can delirium, so defined, actually be distinguished from delusion and madness? Suppose we have a subject who persistently tries to eat their hat. One reading is that the subject perceives the Homburg as a hamburger. A second is that they perceive the hat correctly, but think it appropriate to eat hats. The delirious reading might be that they see the hat as a shoe and believe shoes are to be eaten. For any possible set of behaviours, it seems, some reading can be found that makes it consistent with any of the three possible states.

That’s from a third person point of view, of course, but surely the subject knows which state applies? They can’t reliably tell us, because their utterances are open to multiple interpretations too, but inwardly they know, don’t they? Well, no. The deluded person thinks the world really is bizarre; the mad one is presumably unaware of the madness, and the delirious subject is barred from knowing the true position on both counts. Does it, then, make any sense to uphold the existence of any real distinction? Might we not better say that the three possibilities are really no more than rival diagnostic strategies, which may or may not work better in different cases, but have no absolute validity?

Can we perhaps fall back on consistency? Someone with delusions may see a convincing oasis out in the desert, but if a moment later it becomes a mountain, their rational faculties will allow them to notice that something is amiss, and to hypothesise that their sensory inputs are unreliable. However, a subject of Cartesian calibre would have to consider the possibility that they are actually just mistaken in their beliefs about their own experiences; perhaps, in fact, it always seemed to be a mountain. So once again the distinctions fall away.

Delusion and madness are all very well in their way, but delirium has a unique appeal in that it could be invisible. Suppose my perceptions are all subject to a consistent but complex form of distortion, but my responses have an exquisitely apposite complementary twist, which means that the two sets of errors cancel out, and my actual behaviour and everything I say come out pretty much like those of some tediously sane and normal character. I am as delirious as can be, but you’d never know. Would I know? My mental states are so addled and my grip on reality so contorted, it hardly seems I could know anything; but if you question me about what I’m thinking, my responses all sound perfectly fine, just like those of my twin who doesn’t have invisible delirium.
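(For the programmatically inclined, the cancellation can be made concrete in a few lines of Python. This is just my own toy sketch – the functions and numbers are entirely made up, on the cartoonish assumption that perception and cognition can be caricatured as invertible mappings over numbered world-states.)

    # Invisible delirium as a toy model: perception applies a consistent
    # distortion, cognition applies the exact inverse, so outward behaviour
    # matches the sane twin on every input. All names here are invented.

    def distort(x: int) -> int:
        """Deranged perception: a consistent but complex distortion (invertible mod 101)."""
        return (7 * x + 3) % 101

    def untwist(y: int) -> int:
        """Deranged cognition: the exquisitely apposite complementary twist."""
        return ((y - 3) * pow(7, -1, 101)) % 101  # pow(7, -1, 101) is the modular inverse of 7

    def sane_response(x: int) -> int:
        """What the tediously normal twin says about world-state x."""
        return x + 1

    def delirious_response(x: int) -> int:
        """Distorted perception fed into complementarily distorted cognition."""
        return sane_response(untwist(distort(x)))

    # The two sets of errors cancel: behaviour is identical across the board.
    assert all(delirious_response(x) == sane_response(x) for x in range(101))

Run it and the assertion passes; from the outside, nothing is amiss – which is exactly the point.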

We might be tempted to say that invisible delirium is no delirium; my thoughts are determined by the functioning of my cognitive processes, and since those end up working fine, it makes no sense to believe in some inner place where things go all wrong for a while.

But what if I get super invisible delirium? In this wonderful syndrome, my inputs and outputs are mangled in complementary ways again, but by great good fortune the garbled version actually works faster and better than normal. Far from seeming confused, I now seem to understand stuff better and more deeply than before. After all, isn’t reaching this kind of state why people spend time meditating and doing drugs?

But perhaps I am falling prey to the euphoric condition diagnosed by Hume…

PhiMiFi

If there’s one thing philosophers of mind like more than an argument, it’s a rattling good yarn. Obviously we think of Mary the Colour Scientist, Zombie Twin (and Zimboes, Zomboids, Zoombinis…), the Chinese Room (and the Chinese Nation), Brain in a Vat, Swamp-Man, Chip-Head, Twin Earth and Schmorses… even papers whose content doesn’t include narratives at this celebrated level often feature thought-experiments that are strange and piquant. Obviously philosophy in general goes in for that kind of thing too – just think of the trolley problems that have been around forever but became inexplicably popular in the last year or so (I was probably force-fed too many at an impressionable age, and now I can’t face them – it’s like broccoli, really): but I don’t think there’s another field that loves a story quite like the Mind guys.

I’ve often alluded to the way novelists have been attacking the problems of minds by other means ever since the James Boys (Henry and William) set up their pincer movement on the stream of consciousness; and how serious novelists have from time to time turned their hand to exploring the theme of consciousness with clear reference to academic philosophy, sometimes even turning aside to debunk a thought experiment here and there. We remember philosophically considerable works of genuine science fiction such as Scott Bakker’s Neuropath. We haven’t forgotten how Ian McEwan and Sebastian Faulks in their different ways made important contributions to the field of Bogus but Totally Convincing Psychology with De Clérambault’s Syndrome and Glockner’s Isthmus; nor have we forgotten David Lodge’s book ‘Consciousness and the Novel’ and his novel Thinks. And philosophers have not been averse to writing the odd story, from Dan Lloyd’s novel Radiant Cool up to short stories by many other academics, including Dennett and Eric Schwitzgebel.

So I was pleased to hear (via a tweet from Eric himself) of the inception of an unexpected new project in the form of the Journal of Science Fiction and Philosophy. The Journal ‘aims to foster the appreciation of science fiction as a medium for philosophical reflection’. Does that work? Don’t science fiction and philosophy have significantly different objectives? I think it would be hard to argue that all science fiction is of philosophical interest (other than to the extent that everything is of philosophical interest). Some space opera and a disappointing amount of time-travel narrative really just consists of adventure stories for which the SF premise is mere background. Some science fiction (less than one might expect) is actually about speculative science. But there is quite a lot that could almost as well be called Phifi as Scifi: stories where the alleged science is thinly or unconvincingly sketched, and simply plays the role of enabler for an examination of social, ethical, or metaphysical premises. You could argue that Asimov’s celebrated robot short stories fit into this category; we have no idea how positronic brains are supposed to work, and it’s the ethical dilemmas that drive the stories.

There is, then, a bit of an overlap; but surely SF and philosophy differ radically in their aims? Fiction aims only to entertain; the ideas can be rubbish so long as they enable the monsters or, slightly better, boggle the mind, can’t they? Philosophy uses stories only as part of making a definite case for the truth of particular positions, part of an overall investigative effort directed, however indirect the route, at the real world? There’s some truth in that, but the line of demarcation is not sharp. For one thing, successful philosophers write entertainingly; I do not think either Dennett or Searle would have achieved recognition for their arguments so easily if they hadn’t been presented in prose clear enough for non-academic readers to understand, and well-crafted enough to make them enjoy the experience. Moreover, philosophy doesn’t have to present the truth; it can ask questions or just try to do some of that mind-boggling. Myself, when I come to read a philosophical paper I do not expect to find the truth (I gave up that kind of optimism along with the broccoli): my hopes are amply fulfilled if what I read is interesting. Equally, while fiction may indeed consist of amusing lies, novelists are not indifferent to the truth, and often want to advance a hypothesis, or at least have us entertain one.

I really think some gifted novelist should take the themes of the famous thought-experiments and attempt to turn them into a coherent story. Meantime, there is every prospect that the new journal represents not dumbing down but wising up, and I for one welcome our new peer-reviewers.

A Third Wave?

An article in the Chronicle of Higher Education (via the always-excellent Mind Hacks) argues cogently that as a new torrent of data about the brain looms, we need to ensure that it is balanced by a corresponding development in theory. That must surely be right: but I wonder whether the torrent of new information is going to bring about another change in paradigm, as the advent of computers in the twentieth century did?

We have mentioned before the two giant projects which aim to map and even simulate the neural structure of the brain, one in America, one in Europe. Other projects elsewhere and steady advances in technology seem to indicate that the progress of empirical neuroscience, already impressive, is likely to accelerate massively in coming years.

The paper points out that at present, in spite of enormous advances, we still know relatively little about the varied types of neurons and what they do; and much of what we think we do know is vague, tentative, and possibly misleading. Soon, however, ‘there will be exabytes (billions of gigabytes) of data, detailing what vast numbers of neurons do, in real time’.

The authors rightly suggest that data alone is no good without theoretical insights: they fear that at present there may be structural issues which lead to pure experimental work being funded while theory, in spite of being cheaper, is neglected or has to tag along as best it can. The study of the mind is an exceptionally interdisciplinary business, and they justifiably say research needs to welcome ‘mathematicians, engineers, computer scientists, cognitive psychologists, and anthropologists into the fold’. No philosophers in the list, I notice, although the authors quote Ned Block approvingly. (Certainly no novelists, although if we’re studying consciousness the greatest corpus of exploratory material is arguably in literature rather than science. Perhaps that’s asking a bit too much at this stage: grants are not going to be given to allow neurologists to read Henry as well as William James, amusing though that might be.)

I wonder if we’re about to see a big sea change; a Third Wave? There’s no doubt in my mind that the arrival of practical computers in the twentieth century had a vast intellectual impact. Until then philosophy of mind had not paid all that much attention to consciousness. Free Will, of course, had been debated for centuries, and personal identity was also a regular topic; but consciousness per se and qualia in particular did not seem to be that important until – I think – the seventies or eighties, when a wide range of people began to have actual experience of computers. Locke was perhaps the first person to set out a version of the inverted spectrum argument, in which the blue in your mind is the same as the yellow in mine, and vice versa; but far from its being a key issue he mentions it only to dismiss it: we all call the same real-world colours by the same names, so it’s a matter of no importance. Qualia? Of no philosophical interest.

I think the thing is that until computers actually appeared it was easy to assume, like Leibniz, that they could only be like mills: turning wheels, moving parts, nothing there that resembles a mind. When people could actually see a computer producing its results, they realised that there was actually the same kind of incomprehensible spookiness about it as there was in the case of human thought; maybe not exactly the same mystery, but a pseudo-magic quality far above the readily-comprehensible functioning of a mill. As a result, human thought no longer looked so unique and we needed something to stand in as the criterion which separated machines from people. Our concept of consciousness got reshaped and promoted to play that role, and a Second Wave of thought about the mind rolled in, making qualia and anything else that seemed uniquely human of special concern.

That wave included another change, though, more subtle but very important. In the past, the answer to questions about the mind had clearly been a matter of philosophy, or psychology; at any rate an academic issue. We were looking for a heavy tome containing a theory. Once computers came along, it turned out that we might be looking for a robot instead. The issues became a matter of technology, not pure theory. The unexpected result was that new issues revealed themselves and came to the fore. The descriptive theories of the past were all very well, but now we realised that if we wanted to make a conscious machine, they didn’t offer much help. A good example appears in Dan Dennett’s paper on cognitive wheels, which sets out a version of the Frame Problem. Dennett describes the problem, and then points out that although it is a problem for robots, it’s just as mysterious for human cognition; actually a deep problem about the human mind which had never been discussed; it’s just that until we tried to build robots we never noticed it. Most philosophical theories still have this quality, I’m afraid, even Dennett’s: OK, so I’m here with my soldering iron or my keyboard: how do I make a machine that adopts the intentional stance? No clue.

For the last sixty years or so I should say that the project of artificial intelligence has set the agenda and provided new illumination in this kind of way. Now it may be that neurology is at last about to inherit the throne. If so, what new transformations can we expect? First I would think that the old-fashioned computational robots are likely to fall back further and that simulations, probably using neural network approaches, are likely to come to the fore. Grand Union theories, which provide coherent accounts from genetics through neurology to behaviour, are going to become more common, and build a bridgehead for evolutionary theories to make more of an impact on ideas about consciousness. However, a lot of things we thought we knew about neurons are going to turn out to be wrong, and there will be new things we never spotted that will change the way we think about the brain. I would place a small bet that the idea of the connectome will look dusty and irrelevant within a few years, and that it will turn out that neurons don’t work quite the way we thought.

Above all though, the tide will surely turn for consciousness. Since about 1950 the game has been about showing what, if anything, was different about human beings; why they were not just machines (or why they were), and what was unique about human consciousness. In the coming decade I think it will all be about how consciousness is really the same as many other mental processes. Consciousness may begin to seem less important, or at any rate it may increasingly be seen as on a continuum with the brain activity of other animals; really just a special case of the perfectly normal faculty of… Well, I don’t actually know what, but I look forward to finding out.

What do you mean?

Robots.net reports an interesting plea (pdf download) for clarity by Emanuel Diamant at the 3rd Israeli Conference on Robotics. Robotics, he says, has been derailed for the last fifty years by the lack of a clear definition of basic concepts: there are more than 130 definitions of data, and more than 75 definitions of intelligence.

I wouldn’t have thought serious robotics had been going for much more than fifty years (though of course there are automata and other precursors which go much further back), so that sounds pretty serious: but he’s clearly right that there is a bad problem, not just for robotics but for consciousness and cognitive science, and not just for data, information, knowledge, intelligence, understanding and so on, but for many other key concepts, notably including ‘consciousness’.

It could be that this has something to do with the clash of cultures in this highly interdisciplinary area. Scientists are relatively well-disciplined about terminology, deferring to established norms, reaching consensus and even establishing taxonomical authorities. I don’t think this is because they are inherently self-effacing or obedient; I would guess instead that this culture arises from two factors: first, the presence of irrefutable empirical evidence establishes good habits of recognising unwelcome truth gracefully; second, a lot of modern scientific research tends to be a collaborative enterprise where a degree of consensus is essential to progress.

How very different things are in the lawless frontier territory of philosophy, where no conventions are universally accepted, and discrediting an opponent’s terminology is often easier and no less prestigious than tackling the arguments. Numerous popular tactics seem designed to throw the terminology into confusion. A philosopher may often, for instance, grab some existing words – ethics/morality, consciousness/awareness, information/data, or whatever – and use them to embody a particular distinction while blithely ignoring the fact that in another part of the forest another philosopher is using the same words for a completely different distinction. When irreconcilable differences come to light a popular move is ‘giving’ the disputed word away: “Alright, then, you can just have ‘free will’ and make it what you like: I’m going to talk about ‘x-free will’ instead in future. I’ll define ‘x-free will’ to my own satisfaction and when I’ve expounded my theory on that basis I’ll put in a little paragraph pointing out that ‘x-free will’ is the only kind worth worrying about, or the only kind everyone in the real world is actually talking about.” These and other tactics lead to a position where in some areas it’s generally necessary to learn a new set of terms for every paper: to have others picking up your definitions and using them in their papers, as happens with Ned Block’s p- and a-consciousness, for example, is a rare and high honour.

It’s not that philosophers are quarrelsome and egotistical (though of course they are); it’s more that the subject matter rarely provides any scope for pinning down an irrefutable position, and is best tackled by single brains operating alone (Churchlands notwithstanding).

Diamant is particularly exercised by problems over ‘data’, ‘information’, ‘knowledge’, and ‘intelligence’. Why can’t we sort these out? He correctly identifies a key problem: some of these terms properly involve semantics, and the others don’t (needless to say, it isn’t clearly agreed which words fall into which camp). What he perhaps doesn’t realise clearly enough is that the essential nature of semantics is an extremely difficult problem which has so far proved unamenable to science. We can recognise semantics quite readily, and we know well enough the sort of thing semantics does; but exactly how it does those things remains a cloudy matter, stuck in the philosophical badlands.

If my analysis is right, the only real hope of clarification would be if we could come up with some empirical research (perhaps neurological, perhaps not) which would allow us to define semantics (or x-semantics, at any rate) in concrete terms that could somehow be demonstrated in a lab. That isn’t going to happen any time soon, or possibly ever.

Diamant wants to press on, however, and inevitably by doing so in the absence of science he falls into philosophy: he implicitly offers us a theory of his own and – guess what? – another new way of using the terminology. The theory he puts forward is that semantics is a matter of convention between entities. Conventions are certainly important: the meaning of particular words or symbols is generally a matter of convention; but that doesn’t seem to capture the essence of the thing. If semantics were simply a matter of convention, then before God created Adam he could have had no semantics, and could not have gone around asking for light; on the other hand, if we wanted a robot to deal with semantics, all we’d need to do would be to agree a convention with it, or perhaps let it in on the prevailing conventions. I don’t know how you’d do that with a robot which had no semantics to begin with, as it wouldn’t be able to understand what you were talking about.

There are, of course, many established philosophical attempts to clarify the intentional basis of semantics. In my personal view the best starting point is H.P. Grice’s theory of natural meaning (those black clouds mean rain); although I think it’s advantageous to use a slightly different terminology…