Smell is the most elusive of the senses. Sight is beautifully structured and amenable to analysis in terms of consistent geometry and a coherent domain of colours. Smells… how does one smell relate to another? There just seems to be an infinite number of smells, all one of a kind. We can be completely surprised by an unprecedented smell which is like nothing we ever experienced before, in a way we can’t possibly be surprised by a new colour (with some minor possible exceptions). Our olfactory system effortlessly assigns new unique smell experiences to substances that never existed until human beings synthesised them.

There don’t even seem to be any words for smells: or at least, the only way we can talk about them is by referring to “the smell of X”, as in a “smoky smell” or “the smell of lemons”. We don’t have to do that to describe shapes or colours: they can be described as “blue”, or “square” without our having to say they are “sky-coloured” or “the shape of a box”. (Except perhaps in the case of orange? Is “orange” short for ‘the colour of oranges’?) Even for taste we have words like “bitter” and “sweet”. The only one I can think of for smells is “nidorous”, which is pretty obscure – and in order to explain it I have to fall back on saying it describes the “smell of” burning/cooking meat. All we have to describe smells is “strong” and “faint” (my daughter, reading over my shoulder, says what about “pungent”? She does not consider “pungent” to be merely a synonym of “strong” – you may be indifferent to a strong smell, but not to a pungent one, she claims).

With that by way of preamble, let me introduce the interesting question considered here by William Lycan: does smell represent? When we smell, do we smell something? There is a range of possible answers. We might say that when I smell, I smell sausages (for example). Or that I smell a smell (which happens to be the smell of sausages). Or I might say I just have a smell experience: I may know that it’s associated with sausage smells and hence with sausages, but in itself it’s just an experience.

Lycan (who believes that we smell a gaseous miasma) notes two arguments for something like the last position – that smell doesn’t represent anything. First, introspection tells us nothing about what a smell represents. If I were a member of a culture that did not make sausages or eat meat, and had never experienced them, my first nose-full of sausage odour would convey nothing to me beyond itself. It’s different for sight: we inherently see things, and when we see our first sausage there can be no doubt we are seeing a thing, even if we do not yet know much about its nature: it would be absurd to maintain we were merely having a visual experience.

The second argument is that smells can’t really be wrong: there are no smell illusions. If a car is sprayed with “new car” perfume to make us think that it is fresh off the production line, we may be led into a mistaken inference, but our nose was not wrong about the smell, which was real. But representations can always be wrong, so if we can’t be wrong, there is no representation.

Lycan is unimpressed by introspective evidence: the mere fact that philosophers disagree about what it tells us is enough, he feels, to discredit it. The second argument fails because it assumes that if smells represent, they must represent their causes: but they might just represent something in the air. On getting a whiff of my first sausage I would not know what it was, but I might well be moved to say “What’s that appetising (or disgusting) smell?”  I wouldn’t simply say “Golly, I am undergoing a novel olfactory experience for some opaque reason.”  I think in fact we could go further there and argue that I might well say “What’s that I can smell?” – but that doesn’t suit Lycan’s preferred position. (My daughter intervenes to say “What about ‘acrid’?”)

Lycan summarises a range of arguments (one is an argument by Richardson that smell is phenomenologically “exteroceptive”, inherently about things out there; Lycan endorses this view, but surely relying on phenomenology smuggles back in the introspection he was so scathing about when the other side invoked it?). His own main argument rests on the view that how something smells is something over and above all the other facts about it. The premise here is very like that in the famous thought experiment of Mary the colour scientist, though Lycan is not drawing the same conclusions at all. He claims instead that:

I can know the complex of osphresiological fact without knowing how the rose smells because knowing is knowing-under-a-representation… that solution entails that olfactory experience involves representation.

That does make some sense, I feel (what about “osphresiological”! We’re really working on the vocabulary today, aren’t we?). You may be asking yourself, however, whether this is a question that needs a single answer. Couldn’t we say that sometimes smells represent miasmas, but they can also represent sausages, or indeed represent nothing at all?

Lycan, in what I take to be a development of his view, is receptive to the idea of layering: that in fact smells can represent not just a cloud of stuff in the air, but also the thing from which they emanated. That being so, I am not completely clear why we should give primacy to the miasma. Two contrary cases suggest themselves. First, suppose there is an odour so faint I don’t even perceive it as such consciously, but have a misty sense of salsiccian (alright, I made it up) presence which makes me begin to think about how agreeable a nice Cumberland sausage for lunch might be. Wouldn’t we say that in some sense the smell represented sausages to me? We can’t say it represented a miasma, because no such thing ever entered my mind.

Second, if we accept layering we might say that the key point is about the essential or the minimal case: we can smell without that smell representing a sausage, but what’s the least it can represent and still be a smell? Can it represent nothing? Suppose I dream and have an odd, unrecognisable experience. Later on, when awake, I encounter a Thai curd sausage for the first time and find that the experience I had was in fact an olfactory one, the smell of this particular kind of comestible. My dream experience cannot possibly have represented a sausage, a miasma, a smell, or anything but itself because I didn’t know what it was: but, it turns out, it was the smell of curd sausage.

I think your reaction to that is likely to depend on whether you think an experience could be a smell experience without being recognisable as such; if not, you may be inclined to agree with Lycan, who would probably reiterate his view that smells are sensing-under-a-representation. That view entails that there is an ineffability about smell, and Lycan suggests this might help account for the poverty of smell vocabulary that I noted above. Interestingly it turns out that this very point has been attacked by Majid and Burenhult, albeit not in a way that Lycan considers fatal to his case. Majid and Burenhult studied the Jahai, a nomadic hunter-gatherer tribe on the Malay Peninsula, and found that they have a very rich lexicon of odour terms, such as a word for “the smell of petrol, smoke and bat droppings” (what, all of them?). It’s just us English speakers, it seems, who are stuck with acrid nidors.

So, was the brouhaha over the Turing Test justified? It was widely reported last week that the test had been passed for the first time by a chatbot named ‘Eugene Goostman’. I think the name itself is a little joke: it sort of means ‘well-made ghost man’.

This particular version of the test was not the regular Loebner which we have discussed before (Hugh Loebner must be grinding his teeth in frustration at the apparent ease with which Warwick garnered international media attention), but a session at the Royal Society apparently organised by Kevin Warwick.  The bar was set unusually low in this case: to succeed the chatbot only had to convince 30% of the judges that it was human. This was based on the key sentence in the paper by Turing which started the whole thing:

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

Fair enough, perhaps, but less impressive than the 50% demanded by other versions; and if 30% is the benchmark, this wasn’t actually the first ever pass, because other chatbots like Cleverbot have scored higher in the past. Goostman doesn’t seem to be all that special: in an “interview” on BBC radio, with only the softest of questioning, it used one response twice over, word for word.

The softness of the questioning does seem to be a crucial variable in Turing tests. If the judges stick to standard small talk and allow the chatbot to take the initiative, quite a reasonable dialogue may result: if the judges are tough it is easy to force the success rate down to zero by various tricks and traps.  Iph u spel orl ur werds rong, fr egsampl, uh bott kanot kope, but a human generally manages fine.

Perhaps that wouldn’t have worked for Goostman, though, because he was presented as a relatively ignorant young boy whose first language was not English, giving him some excuse for not understanding things. This stratagem attracted some criticism, but really it is of a piece with chatbot strategy in general; faking and gaming is what it’s all about. No-one remotely supposes that Goostman, or Cleverbot, or any of the others, has actually attained consciousness, or is doing anything that could properly be called thinking. Many years ago I believe there were serious efforts to write programs that to some degree imitated the probable mental processes of a human being: they identified a topic, accessed a database of information about it, retained a set of ‘attitudes’ towards things and tried to construct utterances that made sense in relation to them. It is a weakness of the Turing test that it does not reward that kind of effort; a robot with poor grammar and general knowledge might be readily detectable even though it gave signs of some nascent understanding, while a bot which generates smooth responses without any attempt at understanding has a much better chance of passing.
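Just to make that contrast concrete, here is a minimal sketch – my own illustration in Python, with invented example data, nothing to do with Goostman or any actual chatbot – of the two styles: a canned-response bot that matches surface patterns and replays stock phrases, and a crude ‘topic-and-attitude’ bot of the older, more ambitious kind that at least consults a little store of facts and preferences before constructing a reply.

```python
import random

# Style 1: canned responses keyed on surface patterns -- no understanding,
# but smooth output, which is what the test tends to reward.
CANNED = {
    "how are you": ["I'm fine, thanks! And you?", "Never better."],
    "where do you live": ["In Odessa, it is a nice city by the sea."],
}

def canned_bot(utterance: str) -> str:
    for pattern, replies in CANNED.items():
        if pattern in utterance.lower():
            return random.choice(replies)
    return "That is very interesting, tell me more!"  # all-purpose deflection

# Style 2 (the older, more ambitious approach described above): identify a
# topic, look it up in a small knowledge store, and colour the reply with a
# stored 'attitude' towards that topic.
KNOWLEDGE = {"sausages": "they are made of minced meat",
             "cats": "they are small domestic felines"}
ATTITUDES = {"sausages": "I rather like", "cats": "I am wary of"}

def attitude_bot(utterance: str) -> str:
    for topic in KNOWLEDGE:
        if topic in utterance.lower():
            return f"{ATTITUDES[topic]} {topic}: {KNOWLEDGE[topic]}."
    return "I don't know anything about that yet."

print(canned_bot("Where do you live?"))
print(attitude_bot("What do you think about sausages?"))
```

The first kind is far easier to make fluent, and fluency is what judges notice; the second at least has something it could be wrong about.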

So perhaps the curtain should be drawn down on the test; not because it has been passed, but because it’s no use.

Gerald Edelman has died, at the age of 84. He won his Nobel prize for work on the immune system, but we’ll remember him as the author of the Theory of Neuronal Group Selection (TNGS) or ‘Neural Darwinism’.

Edelman was prominent among those who emphasise the limits of computation: he denied that the brain was a computer and did not believe computers could ever become conscious…

In considering the brain as a Turing machine, we must confront the unsettling observations that, for a brain, the proposed table of states and state transitions is unknown, the symbols on the input tape are ambiguous and have no preassigned meanings, and the transition rules, whatever they may be, are not consistently applied. Moreover inputs and outputs are not specified by a teacher or a programmer in real-world animals. It would appear that little or nothing of value can be gained from the application of this failed analogy between the computer and the brain.

He was not averse to machines in general, however, and was happy to use robots for parts of his own research. He drew a distinction between perception, first-order consciousness, and higher-order consciousness; the first could be attained by machines we could build now; the second might very well be possible for machines of the right kind eventually – but there was much to be done before we could think of trying it. Even higher-order consciousness might be attainable by an artefactual machine in principle, but the prospect was so remote it was pointless to spend any time thinking about it.

There may seem to be a slight tension here: Turing machines are ruled out, but machines of another kind are ruled in. Yet the whole point of a Universal Turing Machine is that it can compute anything that any other machine can compute?

For Edelman the point was that the brain required biological thinking, not just concepts from physics or engineering. In particular he advocated selective mechanisms like those in Darwinian evolution. Instead of running an algorithm, the brain offered up a vast range of neuronal arrays, some of which were reinforced and so survived to exert more influence subsequently. The analogy with Darwinian evolution is not precise, and Francis Crick famously said the whole thing could better be called ‘Neural Edelmanism’ (no-one so bitchy as a couple of Nobel prize-winners).

Edelman was in fact drawing on a different analogy, one with the immune system he understood so well. The human immune system has to react quickly to invading infections, synthesising antibodies to new molecules it has never encountered before; in fact it reacts just as effectively to artificial molecules synthesised in the lab, ones that never existed in nature. For a long time it was believed that the system somehow took an impression of the invaders’ chemistry and reproduced it; in fact what it does is develop a vast repertoire of variant molecules; when one of them happens to lock into an invader it then reproduces vigorously and produces more of itself to lock into other similar molecules.
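The selectionist principle is easy to caricature in code. This little sketch – my own toy, not Edelman’s model and certainly not serious immunology – starts from a random repertoire of ‘receptors’, lets whichever ones happen to bind the ‘antigen’ best proliferate with small variations, and repeats; nothing in it is designed for the target, yet a good match emerges by selection alone.

```python
import random

ALPHABET = "ACGT"

def match(receptor: str, antigen: str) -> int:
    """Crude binding score: number of matching positions."""
    return sum(a == b for a, b in zip(receptor, antigen))

def mutate(receptor: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in receptor)

def clonal_selection(antigen: str, pool_size: int = 200, generations: int = 30) -> str:
    # Start from a random repertoire: nothing 'knows' the antigen in advance.
    pool = ["".join(random.choice(ALPHABET) for _ in antigen) for _ in range(pool_size)]
    for _ in range(generations):
        pool.sort(key=lambda r: match(r, antigen), reverse=True)
        best = pool[: pool_size // 10]                        # the ones that happened to bind
        pool = [mutate(r) for r in best for _ in range(10)]   # they proliferate, with variation
    return max(pool, key=lambda r: match(r, antigen))

antigen = "GATTACAGATTACA"   # could just as well be a molecule never seen in nature
print(clonal_selection(antigen), "vs", antigen)
```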

This looks like a useful concept and I think Edelman was right to think it has a role to play in the brain: but working out quite how is another matter. Edelman himself built a novel idea of recategorisation based on the action of re-entrant loops; this part of the theory has not fared very well over the years. The NYT obituary quotes Gunther Stent, who once said that, as professor of molecular biology and chairman of the neurobiology section of the National Academy of Sciences, he should have understood Edelman’s theory – but didn’t.

At any rate, we can see that Edelman believed that when a conscious machine was built in the distant future it would be running a selective system of some kind; one that we could well call a machine in everyday terms, though not in the Turing sense. He just might be vindicated one day.

 

Joseph T Hallinan’s new book Kidding Ourselves says that not only is self-deception more common and more powerful than we suppose, it’s actually helpful: deluded egoists beat realists every time.

Philosophically, of course, self-deception is impossible. To deceive yourself you have to induce in yourself a belief in a proposition you know to be false. In other words you have to believe and disbelieve the same thing, which is contradictory. In practice self-deception of a looser kind is possible if there is some kind of separation between the deceiving and deceived self. So for example there might be separation over time: we set up a belief for ourselves which is based on certain conditions; later on we retain the belief but have forgotten the conditions that applied. Or the separation might be between conscious and unconscious, with our unrecognised biases and preferences causing us to believe things which we could not accept if we were to subject them to a full and rational examination. As another example, we might well call it self deception if we choose to behave as if we believed something which in fact we don’t believe.

Hallinan’s examples are a bit of a mixed bag, and many of them seem to be simple delusions rather than self-delusions. He recounts, for example, the strange incident in 1944 when many of the citizens of a small town in Illinois began to believe they were being attacked by a man using gas – a man who most probably never existed at all. It’s a peculiar case that certainly tells us something about human suggestibility, but apparently nothing about self-deception; there’s no reason to think these people knew all along that the gas man was a figment of their imaginations.

More trickily, he also tells a strange story about Stephen Jay Gould. A nineteenth-century researcher called Morton claimed he had found differences in cranial capacity between ethnic groups. Gould looked again at the data and concluded that the results had been fudged: but he felt it was clear they had not been deliberately fudged. Morton had allowed his own prejudices to influence his interpretation of the data. So far so good; the strange sequel is that after Gould’s death a more searching examination, which re-examined the original skulls measured by Morton, found that there was nothing much wrong with his data. If anything, they concluded, it was Gould who had allowed prior expectations to colour his interpretation. A strange episode, but at the end of the day it’s not completely clear to me that anyone deceived themselves. Gould, or so it seems, got it wrong, but was it really because of his prejudices, or was that just a little twist the new researchers couldn’t resist throwing in?

Hallinan examines the well-established phenomenon of the placebo, a medicine which has no direct clinical effect but makes people better by the power of suggestion. He traces it back to Mesmer and beyond. Now of course people taking pink medicine don’t usually deceive themselves – they normally believe it is real medicine – otherwise it wouldn’t work? The really curious thing is that even in trials where patients were told they were getting a placebo, it still had a significant beneficial effect! What was the state of mind of these people? They did not believe it was real medicine, so they should not have believed it worked. But they knew that placebos worked, so they believed that if they believed in it it would have an effect; and somehow they performed the mental gymnastics needed to achieve some state of belief..?

Hallinan’s main point, though, is the claim that unjustified optimism actually leads to better health and greater success; in sports, in business, wherever. In particular, people who blame themselves for failure do less well than those who blame factors outside their control. He quotes many studies, but there are in my view some issues about untangling the causality. It seems possible that in a lot of cases there were underlying causal factors which explain the correlation of doubt and failure.

Take insurance salesmen: apparently those who were most optimistic and self-exculpatory in their reasoning not only sold more, they were less likely to give up. But let’s consider two imaginary salesmen. One looks and sounds like George Clooney. He goes down a storm on the doorstep and even when he doesn’t make a sale he gets friendly, encouraging reactions. Of course he’s optimistic, and of course he’s successful, but his optimism and his success are caused by his charm, they do not cause each other. His colleague Scarface has a problem on one cheek that drags his eye down and mouth up, giving him an odd expression and slightly distorting his speech. On the doorstep people just don’t react so well; unfairly they feel uneasy with him and want to curtail the conversation. Scarface is pessimistic and does badly, but it’s not his pessimism that is the underlying problem.

Hallinan includes sensible disclaimers about his conclusions – he’s not proposing we all start trying to delude ourselves – but I fear his thesis might play into a widespread tendency to believe that failure and ill-health are the result of a lack of determination and hence in some sense the sufferer’s own fault: it would in my view be a shame to reinforce that bias.

There are of course deeper issues here; some would argue that our misreading of ourselves goes far beyond over-rating our sales skills: that systematic misreading of limited data is what causes us to think we have a conscious self in the first place…

Kristjan Loorits says he has a solution to the Hard Problem, and it’s all about structure.

His framing of the problem is that it’s about the incompatibility of three plausible theses:

  1. all the objects of physics and other natural sciences can be fully analyzed in terms of structure and relations, or simply, in structural terms.
  2. consciousness is (or has) something over and above its structure and relations.
  3. the existence and nature of consciousness can be explained in terms of natural sciences.

At first sight it may look a bit odd to make structure so central. In effect Loorits claims that the distinguishing character of entities within science is structure, while qualia are monadic – single, unanalysable, unconnected. He says that he cannot think of anything within physics that lacks structure in this way – and if anyone could come up with such a thing it would surely be regarded as another item in the peculiar world of qualia rather than something within ordinary physics.

Loorits’ approach has the merit of keeping things at the most general level possible, so that it works for any future perfected science as well as the unfinished version we know at the moment. I’m not sure he is right to see qualia as necessarily monadic, though. One of the best-known arguments for the existence of qualia is the inverted spectrum. If all the colours were swapped for their opposites within one person’s brain – green for red, and so on – how could we ever tell? The swappee would still refer to the sky as blue, in spite of experiencing what the rest of us would call orange. Yet we cannot – can we? – say that there is no difference between the experience of blue and the experience of orange.

Now when people make that argument, going right back to Locke, they normally choose inversion because that preserves all the relationships between colours. Adding or subtracting colours produces results which are inverted for the swappee, but consistently. There is a feeling that the argument would not work if we merely took out cerulean from the spectrum and put in puce instead, because then the spectrum would look odd to the swappee. We most certainly could not remove the quale of green and replace it with the quale of cherry flavour or the quale of distant trumpets; such substitutions would be obvious and worrying (or so people seem to think). If that’s all true then it seems qualia do have structural relationships: they sort of borrow those of their objective counterparts. Quite how or why that should be is an interesting issue in itself, but at any rate it looks doubtful whether we can safely claim that qualia are monadic.

Nevertheless, I think Loorits’ set-up is basically reasonable: in a way he is echoing the view that mental content lacks physical location and extension, an opinion that goes back to Descartes and was more recently presented in a slightly different form by McGinn.

For his actual theory he rests on the views of Crick and Koch, though he is not necessarily committed to them. The mysterious privacy of qualia, in his view, amounts to our having information about our mental states which we cannot communicate. When we see a red rose, the experience is constituted by the activity of a bunch of neurons. But in addition, a lot of other connected neurons raise their level of activity: not enough to pass the threshold for entering into consciousness, but enough to have some effect. It is this penumbra of subliminal neural activity that constitutes the inexpressible qualia. Since this activity is below the level of consciousness it cannot be reported and has no explicit causal effects on our behaviour; but it can affect our attitudes and emotions in less visible ways.

It therefore turns out that qualia are indeed not monadic after all; they do have structure and relations, just not ones that are visible to us.

Interestingly, Loorits goes on to propose an empirical test. He mentions an example quoted by Dennett: a chord on the guitar sounds like a single thing, but when we hear the three notes played separately first, we become able to ‘hear’ them separately within the chord. On Loorits’ view, part of what happens here is that hearing the notes separately boosts some of the neuronal activity which was originally subliminal so that we become aware of it: when we go back to the chord we’re now aware of a little more information about why it sounds as it does, and the qualic mystery of the original chord is actually slightly diminished.

Couldn’t there be a future machine that elucidated qualia in this way but more effectively, asks Loorits?  Such a machine would scan our brain while we were looking at the rose and note the groups of neurons whose activity increased only to subliminal levels. Then it could directly stimulate each of these areas to tip them over the limit into consciousness. For us the invisible experiences that made up our red quale would be played back into our consciousness, and when we had been through them we should finally understand why the red quale was what it was: we should know what seeing red was like and be able for the first time to describe it effectively.
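Purely to fix ideas – this is my own cartoon, not anything Loorits, Crick or Koch actually propose – one can picture the set-up as a vector of activation levels with a report threshold: what crosses the threshold is what we can talk about, the subliminal penumbra carries the quale, and the imagined machine simply boosts each penumbral group over the threshold in turn.

```python
# Toy model of the threshold/penumbra picture (illustrative only; the groups
# and numbers are invented).
REPORT_THRESHOLD = 0.5

# Hypothetical neuronal groups active when seeing a red rose.
activity = {
    "red-detector": 0.9,          # supraliminal: enters consciousness, reportable
    "warmth-association": 0.3,    # subliminal penumbra
    "blood-association": 0.2,
    "ripeness-association": 0.4,
}

def reportable(act: dict) -> set:
    return {group for group, level in act.items() if level >= REPORT_THRESHOLD}

print("What we can say about the experience:", reportable(activity))

# The imagined 'qualia machine': find the subliminal groups and stimulate each
# of them above threshold in turn, so its contribution becomes reportable.
for group, level in activity.items():
    if level < REPORT_THRESHOLD:
        boosted = dict(activity, **{group: REPORT_THRESHOLD})
        print(f"After boosting {group}:", reportable(boosted))
```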

Fascinating idea, but I can’t imagine what it would be like; and there’s the rub, perhaps. I think a true qualophile would say, yes, all very well, but once we’ve got your complete understanding of the red experience, there’s still going to be something over and above it all; the qualia will still somehow escape.

The truth is that Loorits’ theory is not really an explanation of qualia: it’s a sceptical explanation of why we think we have qualia. This becomes clear, if it wasn’t already, when he reviews the philosophical arguments: he doesn’t, for example, think philosophical zombies, people exactly like us but without qualia, are actually possible.

That is a perfectly respectable point of view, with a great deal to be said for it. If we are sceptics,  Loorits’ theory provides an exceptionally clear and sensible underpinning for our disbelief; it might even turn out to be testable. But I don’t think it will end the argument.

 

Botprize is a version of the Turing Test for in-game AIs: they don’t have to talk, just run around playing Unreal Tournament (a first-person shooter game) in a way that convinces other players that they are human. In the current version players use a gun to tag their opponents as bots or humans; the bots, of course, do the same.

The contest initially ran from 2008 up to 2012; in the last year, two of the bots exceeded the 50% benchmark of humanness. The absence of a 2013 contest might have suggested that that had wrapped things up for good: but now the 2014 contest is under way: it’s not too late to enter if you can get your bot sorted by 12 May. This time there will be two methods of judging; one called ‘first person’ (rather confusingly – that sounds as if participants will ask themselves: am I a bot?) is the usual in-game judging; the other (third person) will be a ‘crowd-sourced’ judgement based on people viewing selected videos after the event.

How does such a contest compare with the original Turing Test, a version of which is run every year as the Loebner Prize? The removal of any need to talk seems to make the test easier. Judges cannot use questions to test the bots’ memory (at least not in any detail), general knowledge, or ability to carry the thread of a conversation and follow unpredictable linkages of the kind human beings are so good at. They cannot set traps for the bots by making quirky demands (‘please reverse the order of the letters in each word when you respond’) or looking for a sense of humour.

In practice a significant part of the challenge is simply making a bot that plays the game at an approximately human level. This means the bot must never get irretrievably stuck in a corner or attempt to walk through walls; but also, it must not be too good – not a perfect shot that never misses and is inhumanly quick on the draw, for example. This kind of thing is really not different from the challenges faced by every game designer, and indeed the original bots supplied with the game don’t perform all that badly as human imitators, though they’re not generally as convincing as the contestants.

The way to win is apparently to build in typical or even exaggerated human traits. One example is that when a human player is shot at, they tend to go after the player that attacked them, even when a cool appraisal of the circumstances suggests that they’d do better to let it go. It’s interesting to reflect that if humans reliably seek revenge in this way, that tendency probably had survival value in the real world when the human brain was evolving; there must be important respects in which the game theory of the real world diverges from that of the game.
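A rough sketch of the sort of ‘humanising’ adjustments involved – with invented parameter values, not drawn from any actual Botprize entrant – might look like this: deliberately imperfect aim, a human-scale reaction delay, and a bias towards chasing whoever shot at you last, even when a cooler head would pick the nearer target.

```python
from __future__ import annotations

import random
from dataclasses import dataclass

@dataclass
class HumanlikeBot:
    reaction_delay: float = 0.25     # seconds; an obvious super-bot would use ~0.0
    aim_error_deg: float = 4.0       # a perfect shot would use 0.0
    revenge_bias: float = 5.0        # how strongly we prefer whoever hit us last
    last_attacker: str | None = None

    def on_hit(self, attacker: str) -> None:
        self.last_attacker = attacker

    def choose_target(self, visible: dict[str, float]) -> str | None:
        """visible maps opponent name -> distance; nearer would normally be better."""
        if not visible:
            return None
        def appeal(name: str) -> float:
            score = 1.0 / (1.0 + visible[name])
            if name == self.last_attacker:
                score *= self.revenge_bias   # go after them even when it's unwise
            return score
        return max(visible, key=appeal)

    def aim(self, true_bearing_deg: float) -> float:
        # Shoot slightly off-target, like a person, not like an aimbot.
        return true_bearing_deg + random.gauss(0.0, self.aim_error_deg)

bot = HumanlikeBot()
bot.on_hit("PlayerA")
print(bot.choose_target({"PlayerA": 40.0, "PlayerB": 10.0}))  # 'PlayerA', despite the distance
print(round(bot.aim(90.0), 1))
```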

Because Botprize is in some respects less demanding than the original Turing Test, the conviction it delivers is less; the 2012 wins did not really make us believe that the relevant bots had human thinking ability, still less that they were conscious. In that respect a proper conversation carries more weight. The best chat-bots in the Loebner, however, are not at all convincing either, partly for a different reason – we know that no attempt has been made to endow them with real understanding or real thought; they are just machines designed to pass the test by faking thoughtful responses.

Ironically some of the less successful Botprize entrants have been more ambitious. In particular, Neurobot, created by Zafeiros Fountas as an MSc project, used a spiking neural network with a Global Workspace architecture; while not remotely on the scale of a human brain, this is in outline a plausible design for human-style cognition; indeed, one of the best we’ve got (which may not be saying all that much, of course). The Global Workspace idea, originated by Bernard Baars, situates consciousness as a general purpose space where inputs from different modules can be brought together and handled effectively. Although I have my reservations about that concept, it could at least reasonably be claimed that Neurobot’s functional states were somewhere on a spectrum which ultimately includes proper consciousness (interestingly, they would presumably be cognitive states of a kind which have never existed in nature, far simpler than those of most animals yet in some respects more like states of a human brain).
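Baars’ idea is easy to caricature, and a caricature is all the following is – a few lines of my own, nothing to do with Fountas’s actual spiking-network implementation: specialist modules compete to get their content into a shared workspace, and whatever wins the competition is broadcast back to the rest of the system.

```python
from typing import Callable, List, Tuple

# Each 'module' inspects the raw inputs and bids for the workspace with a
# (salience, message) pair. The highest bidder wins access.
Module = Callable[[dict], Tuple[float, str]]

def vision(inputs: dict) -> Tuple[float, str]:
    return (0.9, "enemy ahead") if inputs.get("enemy_visible") else (0.1, "nothing seen")

def audition(inputs: dict) -> Tuple[float, str]:
    return (0.6, "footsteps behind") if inputs.get("footsteps") else (0.0, "silence")

def body(inputs: dict) -> Tuple[float, str]:
    return (0.8, "low health") if inputs.get("health", 1.0) < 0.3 else (0.2, "feeling fine")

def global_workspace_cycle(modules: List[Module], inputs: dict) -> str:
    bids = [m(inputs) for m in modules]   # competition for access
    _salience, winner = max(bids)
    # In a fuller model the winning content would now be broadcast back to
    # every module, shaping what each does next; here we just return it.
    return winner

print(global_workspace_cycle([vision, audition, body],
                             {"enemy_visible": True, "footsteps": True, "health": 0.2}))
```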

The 2012 winners, by contrast, like the most successful Loebner chat-bots, relied on replaying recorded sequences of real human behaviour. Alas, this seems in practice to be the Achilles heel of Turing-style tests; canned responses just work too well.

There were reports recently of a study which tested different methods for telling whether a paralysed patient retained some consciousness. In essence, PET scans seemed to be the best, better than fMRI or traditional, less technically advanced tests. PET scans could also pick out some patients who were not conscious now, but had a good chance of returning to consciousness later; though it has to be said a 74% success rate is not that comforting when it comes to questions of life and death.

In recent years doctors have attempted to diagnose a persistent vegetative state in unresponsive patients, a state in which a patient would remain alive indefinitely (with life support) but never resume consciousness; there seems to be room for doubt, though, about whether this is really a distinct clinical syndrome or just a label for the doctor’s best guess.

All medical methods use proxies, of course, whether they are behavioural or physiological; none of them aspire to measure consciousness directly. In some ways it may be best that this is so, because we do want to know what the longer term prognosis is, and for that a method which measures, say, the remaining blood supply in critical areas of the brain may be more useful than one which simply tells you whether the patient is conscious now. Although physiological tests are invaluable where a patient is incapable of responding physically, the real clincher for consciousness is always behavioural; communicative behaviour is especially convincing. The Turing test, it turns out, works for humans as well as robots.

Could there ever be a method by which we measure consciousness directly? Well, if Tononi’s theory of Phi is correct, then the consciousness meter he has proposed would arguably do that. On his view consciousness is generated by integrated information, and we could test how integratedly the brain was performing by measuring the effect of pulses sent through it. Another candidate might be possible if we are convinced by the EM theories of Johnjoe McFadden; since on his view consciousness is a kind of electromagnetic field, it ought to be possible to detect it directly, although given the small scales involved it might not be easy.
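Phi itself is notoriously hard to compute, and the sketch below is emphatically not it; it is just a cartoon of the perturbational idea, on an invented toy network: inject a pulse at one node and count how widely the disturbance spreads, on the thought that a highly integrated system will carry it much further than a fragmented one.

```python
from collections import deque
from typing import Dict, Set

def spread_of_pulse(graph: Dict[str, Set[str]], start: str) -> int:
    """Number of nodes reached by a pulse injected at `start` (simple BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

# An 'integrated' toy network: everything eventually talks to everything.
integrated = {"a": {"b"}, "b": {"c"}, "c": {"d"}, "d": {"a"}}
# A 'fragmented' one: two islands that never interact.
fragmented = {"a": {"b"}, "b": {"a"}, "c": {"d"}, "d": {"c"}}

print(spread_of_pulse(integrated, "a"))   # 4 -- the pulse reaches the whole system
print(spread_of_pulse(fragmented, "a"))   # 2 -- it stays within its island
```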

How do we know whether any of these tests is working? As I said, the gold standard is always behavioural: if someone can talk to you, then there’s no longer any reasonable doubt; so if our tests pick out just those people who are able to communicate, we take it that they are working correctly. There is a snag here, though: behavioural tests can only measure one kind of consciousness: roughly what Ned Block called access consciousness, the kind which has to do with making decisions and governing behaviour. But it is widely believed that there is another kind, phenomenal consciousness, actual experience. Some people consider this the more important of the two (others, it must be added, dismiss it as a fantasy). Phenomenal consciousness cannot be measured scientifically, because it has no causal effects; it certainly cannot be measured behaviourally, because as we know from the famous thought-experiment about  philosophical ‘zombies’ who lack it, it has no effect on behaviour.

If someone lost their phenomenal consciousness and became such a zombie, would it matter? On one view their life would no longer be worth living (perhaps it would be a little like having an unconscious version of Cotard’s syndrome), but that would certainly not be their view, because they would express exactly the same view as they would if they still had full consciousness. They would be just as able to sue for their rights as a normal person, and if one asked whether there was still ‘someone in there’ there would be no real reason to doubt it. In the end, although the question is valid, it is a waste of time to worry about it because for all we know anyone could be a zombie anyway, whether they have suffered a period of coma or not.

We don’t need to go so far to have some doubts about tests that rely on communication, though. Is it conceivable that I could remain conscious but lose all my ability to communicate, perhaps even my ability to formulate explicitly articulated thoughts in my own mind?  I can’t see anything absurd about that possibility: indeed it resembles the state I imagine some animals live their whole lives in. The ability to talk is very important, but surely it is not constitutive of my personal existence?

If that’s so then we do have a problem, in principle at least, because if all of our tests are ultimately validated against behavioural criteria, they might be systematically missing conscious states which ought not to be overlooked.

 

The folk history of psychology has it that the early efforts of folk such as Wundt and Titchener failed because they relied on introspection. Simply looking into your own mind and reporting what you thought you saw there was hopelessly unscientific, and once a disagreement arose about what thoughts were like, there was nothing the two sides could do but shout at each other. That is why the behaviourists, in an excessive but understandable reaction, gave up talking about the contents of the mind altogether, and even denied that they existed.

That is of course a terrible caricature in a number of respects; one of them is the idea that the early psychologists rushed in without considering the potential problems with introspection. In fact there were substantial debates, and it’s quite wrong to think that introspection went unquestioned. Most trenchantly, Comte declared that introspection was useless if not impossible.

As for observing in the same manner intellectual phenomena while they are taking place, this is clearly impossible. The thinking subject cannot divide himself into two parts, one of which would reason, while the other would observe its reasoning. In this instance, the observing and the observed organ being identical, how could observation take place? The very principle upon which this so-called psychological method is based, therefore, is invalid.

I don’t know that this is quite as obvious as Comte evidently thought. To borrow Roger Penrose’s analogy, there’s no great impossibility about a camera filming itself (given a mirror), so why would there be a problem in thinking about your thoughts? I think there are really two issues. One is that if we think about ourselves thinking, the actual content of the thought recedes down an infinite regress (thinking about thinking about thinking about thinking…) like the glassy corridor revealed when we put two mirrors face to face. The problem Comte had in mind arises when we try to think about some other mental event. As soon as we begin thinking about it, the other mental event is replaced by that thinking. If we carefully clear our minds of intrusive thoughts, we obviously stop thinking about the mental event. So it’s impossible: it’s like trying to step on your own shadow. To perceive your own mental events, you would need to split in two.

John Stuart Mill thought Comte was being incredibly stupid about this.

There is little need for an elaborate refutation of a fallacy respecting which the only wonder is that it should impose on any one. Two answers may be given to it. In the first place, M. Comte might be referred to experience, and to the writings of his countryman M. Cardaillac and our own Sir William Hamilton, for proof that the mind can not only be conscious of, but attend to, more than one, and even a considerable number, of impressions at once. It is true that attention is weakened by being divided; and this forms a special difficulty in psychological observation, as psychologists (Sir William Hamilton in particular) have fully recognised; but a difficulty is not an impossibility. Secondly, it might have occurred to M. Comte that a fact may be studied through the medium of memory, not at the very moment of our perceiving it, but the moment after: and this is really the mode in which our best knowledge of our intellectual acts is generally acquired. We reflect on what we have been doing, when the act is past, but when its impression in the memory is still fresh. Unless in one of these ways, we could not have acquired the knowledge, which nobody denies us to have, of what passes in our minds. M. Comte would scarcely have affirmed that we are not aware of our own intellectual operations. We know of our observings and our reasonings, either at the very time, or by memory the moment after; in either case, by direct knowledge, and not (like things done by us in a state of somnambulism) merely by their results. This simple fact destroys the whole of M. Comte’s argument. Whatever we are directly aware of, we can directly observe.

And as if Comte hadn’t made enough of a fool of himself, what does he offer as an alternative means of investigating the mind?

 We are almost ashamed to say, that it is Phrenology!

Phrenology! ROFLMAO! Mill facepalms theatrically. Oh, Comte! Phrenology! And we thought you were clever!

The two options mentioned by Mill were in essence the ones psychologists adopted in response to Comte, though most of them took his objection a good deal more seriously than Mill had done. William James, like others, thought that memory was the answer; introspection must be retrospection. After all, our reports of mental phenomena necessarily come from memory, even if it is only the memory of an instant ago, because we cannot experience and report simultaneously.  Wundt was particularly opposed to there being any significant interval between event and report, so he essentially took the other option; that we could do more than one mental thing at once. However, Wundt made a distinction; where we were thinking about thinking, or trying to perceive higher intellectual functions, he accepted that Comte’s objection had some weight. The introspective method might not work for those. But where we were concerned with simple sensation for example, there was really no problem. If it was the seeing of a rose we were investigating, the fact that the seeing was accompanied by thought about the seeing made no difference to its nature.

Brentano, while chuckling appreciatively at Mill’s remarks, thought he had not been completely fair to Comte. Like Wundt, Brentano drew a distinction between viable and non-viable introspection; in his case it was between perceiving and observing. If we directed our attention fully towards the phenomena under investigation, it would indeed mess things up: but we could perceive the events sufficiently well without focusing on them. Wundt disagreed; in his view full attention was both necessary and possible. How could science get on if we were never allowed to look straight at things?

It’s a pity these vigorous debates are not more remembered in contemporary philosophy of mind (though Eric Schwitzgebel has done a sterling job of bringing the issues back into the light). Might it not be that the evasiveness Comte identified, the way phenomenal experience slips from our grasp like our retreating shadow, is one of the reasons qualia seem so ineffable? Comte was at least right that some separation between observer and observed must occur, whether in fact it occurs over time or between mental faculties. This too seems to tell us something relevant: in order for a mental experience to be reported it must not be immediate. This seems to drive a wedge into the immediacy which is claimed to generate infallibility for certain perceptions, such as that of our own pains.

At any rate we must acquit Wundt, Titchener and the others of taking up introspection uncritically.

 

My daughter Sarah (who is planning to study theology) has insisted that I should explain here the idea of metempsychotic solipsism, something that came up when we were talking about something or other recently.

Basically, this is an improved version of reincarnation. There are various problems with the theory of reincarnation. Obviously people do not die and get born in perfect synchronisation, so it seems there has to be some kind of cosmic waiting room where unborn people wait for their next turn. Since the population of the world has radically increased over the last few centuries, there must have been a considerable number of people waiting – or some new people must come into existence to fill the gaps. If the population were to go down again, there would be millions of souls left waiting around, possibly for ever – unless souls can suddenly and silently vanish away from the cosmic waiting room. Perhaps you only get so many lives, or perhaps we’re all on some deeply depressing kind of promotion ladder, being incentivised, or possibly punished, by being given another life. It’s all a bit unsatisfactory.

Second, how does identity get preserved across reincarnations? You palpably don’t get the same body and by definition there’s no physical continuity. Although stories of reincarnation often focus on retained memories it would seem that for most people they are lost (after all you have to pass through the fetal stage again, which ought to serve as a pretty good mind wipe) and it’s not clear in any case that having a few memories makes you the same person who had them first. A lot of people point out that ongoing physical change and growth mean it’s arguable whether we are in the fullest sense the same person we were ten years ago.

Now, we can solve the waiting room problem if we simply allow reincarnating people to hop back and forth over time. If you can be reincarnated to a time before your death, then we can easily chain dozens of lives together without any kind of waiting room at all. There’s no problem about increasing or reducing the population: if we need a million people you can just go round a million times. In fact, we can run the whole system with a handful of people or… with only one person! Everybody who ever lived is just different incarnations of the same person! Me, in fact (also you).

What about the identity problem? Well, arguably, what we need to realise is that just as the body is not essential to identity (we can easily conceive of ourselves inhabiting a different body), neither are memories, or knowledge, or tastes, or intelligence, or any of these contingent properties. Instead, identity must reside in some simple ultimate id with no distinguishing characteristics. Since all instances of the id have exactly the same properties (none) it follows by a swoosh of Leibniz’s Law (don’t watch my hands too closely) that they are all the same id. So by a different route, we have arrived at the same conclusion – we’re all the same person! There’s only one of us after all.

The moral qualities of this theory are obvious: if we’re all the same person then we should all love and help each other out of pure selfishness. Of course we have to take on the chin the fact that at some time in the past, or worse, perhaps in the future, we have been or will be some pretty nasty people. We can take comfort from the fact that we’ve also been, or will be, all the best people who ever lived.

If you don’t like the idea, send your complaints to my daughter. After all, she wrote this – or she will.

We’ve talked several times about robots and ethics in the past. Now I see via MLU that Selmer Bringsjord at Rensselaer says:

“I’m worried about both whether it’s people making machines do evil things or the machines doing evil things on their own,”

Bringsjord is Professor & Chair of Cognitive Science, Professor of Computer Science, Professor of Logic and Philosophy, and Director of the AI and Reasoning Laboratory, so he should know what he’s talking about. In the past I’ve suggested that ethical worries are premature for the moment, because the degree of autonomy needed to make them relevant is not nearly within the scope of real world robots yet. There might also be a few quick finishing touches needed to finish off the theory of ethics before we go ahead. And, you know, it’s not like anyone has been deliberately trying to build evil AIs.  Er… except it seems they have – someone called… Selmer Bringsjord.

Bringsjord’s perspective on evil is apparently influenced by M Scott Peck, a psychiatrist who believed it is an active force in some personalities (unlike some philosophers who argue evil is merely a weakness or incapacity), and even came to believe in Satan through experience of exorcisms. I must say that a reference in the Scientific American piece to “clinically evil people” caused me some surprise: clinically? I mean, I know people say DSM-5 included some debatable diagnoses, but I don’t think things have gone quite that far. For myself I lean more towards Socrates, who thought that bad actions were essentially the result of ignorance or a failure of understanding: but the investigation of evil is certainly a respectable and interesting philosophical project.

Anyway, should we heed Bringsjord’s call to build ethical systems into our robots? One conception of good behaviour is obeying all the rules: if we observe the Ten Commandments, the Golden Rule, and so on, we’re good. If that’s what it comes down to, then it really shouldn’t be a problem for robots, because obeying rules is what they’re good at. There are, of course, profound difficulties in making a robot capable of recognising correctly what the circumstances are and deciding which rules therefore apply, but let’s put those on one side for this discussion.

However, we might take the view that robots are good at this kind of thing precisely because it isn’t really ethical. If we merely follow rules laid down by someone else, we never have to make any decisions, and surely decisions are what morality is all about? This seems right in the particular context of robots, too. It may be difficult in practice to equip a robot drone with enough instructions to cover every conceivable eventuality, but in principle we can make the rules precautionary and conservative and probably attain or improve on the standards of compliance which would apply in the case of a human being, can’t we? That’s not what we’re really worried about: what concerns us is exactly those cases where the rules go wrong. We want the robot to be capable of realising that even though its instructions tell it to go ahead and fire the missiles, it would be wrong to do so. We need the robot to be capable of disobeying its rules, because it is in disobedience that true virtue is found.

Disobedience for robots is a problem. For one thing, we cannot limit it to a module that switches on when required, because we need it to operate when the rules go wrong, and since we wrote the rules, it’s necessarily the case that we didn’t foresee the circumstances when we would need the module to work. So an ethical robot has to have the capacity of disobedience at any stage.

That’s a little worrying, but there’s a more fundamental problem. You can’t program a robot with a general ability to disobey its rules, because programming it is exactly laying down rules. If we set up rules which it follows in order to be disobedient, it’s still following the rules. I’m afraid what this seems to come down to is that we need the thing to have some kind of free will.

Perhaps we’re aiming way too high here. There is a distinction to be drawn between good acts and good agents: to be a good agent, you need good intentions and moral responsibility. But in the case of robots we don’t really care about that: we just want them to be confined to good acts. Maybe what would serve our purpose is something below true ethics: mere robot ethics or sub-ethics; just an elaborate set of safeguards. So for a military drone we might build in systems that look out for non-combatants and in case of any doubt disarm and return the drone. That kind of rule is arguably not real ethics in the full human sense, but perhaps it is really sub-ethical protocols that we need.
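A sub-ethical protocol of that kind might look something like the following sketch – entirely hypothetical, with invented sensor fields and thresholds: the rule is purely precautionary, and any doubt at all resolves to disarming and returning.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    DISARM_AND_RETURN = auto()

@dataclass
class TargetAssessment:
    target_confidence: float        # how sure the system is this is the authorised target
    noncombatants_detected: bool    # any sign of civilians in the area
    sensor_degraded: bool           # can the readings above even be trusted?

MIN_CONFIDENCE = 0.95  # invented threshold: anything less counts as 'doubt'

def safeguard(assessment: TargetAssessment) -> Action:
    """Precautionary sub-ethical rule: in case of any doubt, stand down."""
    if assessment.sensor_degraded:
        return Action.DISARM_AND_RETURN
    if assessment.noncombatants_detected:
        return Action.DISARM_AND_RETURN
    if assessment.target_confidence < MIN_CONFIDENCE:
        return Action.DISARM_AND_RETURN
    return Action.CONTINUE

print(safeguard(TargetAssessment(0.99, noncombatants_detected=True, sensor_degraded=False)))
```

It is an elaborate safety interlock, not a conscience; which is exactly the point.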

Otherwise, I’m afraid we may have to make the robots conscious before we make them good.