Self Deception

Joseph T. Hallinan’s new book Kidding Ourselves argues that self-deception is not only more common and more powerful than we suppose, but actually helpful: deluded egoists beat realists every time.

Philosophically, of course, self-deception is impossible. To deceive yourself you have to induce in yourself a belief in a proposition you know to be false. In other words you have to believe and disbelieve the same thing, which is contradictory. In practice self-deception of a looser kind is possible if there is some kind of separation between the deceiving and the deceived self. So, for example, there might be separation over time: we set up a belief for ourselves based on certain conditions; later on we retain the belief but have forgotten the conditions that applied. Or the separation might be between conscious and unconscious, with our unrecognised biases and preferences causing us to believe things which we could not accept if we subjected them to a full and rational examination. As another example, we might well call it self-deception if we choose to behave as if we believed something which in fact we don’t believe.

Hallinan’s examples are a bit of a mixed bag, and many of them seem to be simple delusions rather than self-delusions. He recounts, for example, the strange incident in 1944 when many of the citizens of a small town in Illinois began to believe they were being attacked by a man using gas – a man who most probably never existed at all. It’s a peculiar case that certainly tells us something about human suggestibility, but apparently nothing about self-deception; there’s no reason to think these people knew all along that the gas man was a figment of their imaginations.

More trickily, he also tells a strange story about Stephen Jay Gould. A nineteenth-century researcher called Morton claimed he had found differences in cranial capacity between ethnic groups. Gould looked again at the data and concluded that the results had been fudged, though he felt it was clear they had not been deliberately fudged: Morton had allowed his own prejudices to influence his interpretation of the data. So far so good; the strange sequel is that after Gould’s death a more searching examination, which went back to the original skulls measured by Morton, found that there was nothing much wrong with his data. If anything, the new researchers concluded, it was Gould who had allowed prior expectations to colour his interpretation. A strange episode, but at the end of the day it’s not completely clear to me that anyone deceived themselves. Gould, or so it seems, got it wrong; but was it really because of his prejudices, or was that just a little twist the new researchers couldn’t resist throwing in?

Hallinan examines the well-established phenomenon of the placebo, a medicine which has no direct clinical effect but makes people better by the power of suggestion; he traces it back to Mesmer and beyond. Now of course people taking pink medicine don’t usually deceive themselves – they normally believe it is real medicine, otherwise it wouldn’t work. The really curious thing is that even in trials where patients were told they were getting a placebo, it still had a significant beneficial effect! What was the state of mind of these people? They did not believe it was real medicine, so they should not have believed it worked. But they knew that placebos worked, so they believed that if they believed in it, it would have an effect; and somehow they performed the mental gymnastics needed to achieve some state of belief…?

Hallinan’s main point, though, is the claim that unjustified optimism actually leads to better health and greater success – in sports, in business, wherever. In particular, people who blame themselves for failure do less well than those who blame factors outside their control. He quotes many studies, but in my view there are some issues about untangling the causality: it seems possible that in many cases underlying causal factors explain the correlation between doubt and failure.

Take insurance salesmen: apparently those who were most optimistic and self-exculpatory in their reasoning not only sold more, they were less likely to give up. But let’s consider two imaginary salesmen. One looks and sounds like George Clooney. He goes down a storm on the doorstep, and even when he doesn’t make a sale he gets friendly, encouraging reactions. Of course he’s optimistic, and of course he’s successful, but both his optimism and his success are caused by his charm; neither causes the other. His colleague Scarface has scarring on one cheek that drags his eye down and his mouth up, giving him an odd expression and slightly distorting his speech. On the doorstep people just don’t react so well; unfairly, they feel uneasy with him and want to curtail the conversation. Scarface is pessimistic and does badly, but it’s not his pessimism that is the underlying problem.
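Just to make the confounding worry concrete, here’s a tiny simulation of my own (nothing of the sort appears in the book, and all the numbers are made up): a hidden ‘charm’ factor drives both optimism and sales, so the two end up correlated even though neither causes the other.

```python
import random
import statistics

# Toy confounding sketch (my own invention, not from Hallinan's book).
# A latent 'charm' factor drives both optimism and sales; sales never
# depends on optimism, yet the two come out correlated.

random.seed(0)

def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

charm    = [random.gauss(0, 1) for _ in range(10000)]
optimism = [c + random.gauss(0, 1) for c in charm]   # charm -> optimism
sales    = [c + random.gauss(0, 1) for c in charm]   # charm -> sales (never optimism)

print(correlation(optimism, sales))   # roughly 0.5: optimism appears to 'predict' sales

# Hold charm roughly fixed and the apparent effect largely disappears.
matched = [(o, s) for c, o, s in zip(charm, optimism, sales) if abs(c) < 0.1]
print(correlation([o for o, _ in matched], [s for _, s in matched]))   # near 0
```

Real studies do of course try to control for this sort of thing; the point is only that a raw correlation between optimism and success, on its own, doesn’t tell us which way the causation runs.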

Hallinan includes sensible disclaimers about his conclusions – he’s not proposing we all start trying to delude ourselves – but I fear his thesis might play into a widespread tendency to believe that failure and ill-health are the result of a lack of determination and hence in some sense the sufferer’s own fault: it would in my view be a shame to reinforce that bias.

There are of course deeper issues here; some would argue that our misreading of ourselves goes far beyond over-rating our sales skills: that systematic misreading of limited data is what causes us to think we have a conscious self in the first place…

Structural Qualia

Kristjan Loorits says he has a solution to the Hard Problem, and it’s all about structure.

His framing of the problem is that it’s about the incompatibility of three plausible theses:

  1. all the objects of physics and other natural sciences can be fully analyzed in terms of structure and relations, or simply, in structural terms.
  2. consciousness is (or has) something over and above its structure and relations.
  3. the existence and nature of consciousness can be explained in terms of natural sciences.

At first sight it may look a bit odd to make structure so central. In effect Loorits claims that the distinguishing character of entities within science is structure, while qualia are monadic – single, unanalysable, unconnected. He says that he cannot think of anything within physics that lacks structure in this way – and if anyone could come up with such a thing it would surely be regarded as another item in the peculiar world of qualia rather than something within ordinary physics.

Loorits’ approach has the merit of keeping things at the most general level possible, so that it works for any future perfected science as well as the unfinished version we know at the moment. I’m not sure he is right to see qualia as necessarily monadic, though. One of the best-known arguments for the existence of qualia is the inverted spectrum. If all the colours were swapped for their opposites within one person’s brain – green for red, and so on – how could we ever tell? The swappee would still refer to the sky as blue, in spite of experiencing what the rest of us would call orange. Yet we cannot – can we? – say that there is no difference between the experience of blue and the experience of orange.

Now when people make that argument, going right back to Locke, they normally choose inversion because that preserves all the relationships between colours. Adding or subtracting colours produces results which are inverted for the swappee, but consistently. There is a feeling that the argument would not work if we merely took out cerulean from the spectrum and put in puce instead, because then the spectrum would look odd to the swappee. We most certainly could not remove the quale of green and replace it with the quale of cherry flavour or the quale of distant trumpets; such substitutions would be obvious and worrying (or so people seem to think). If that’s all true then it seems qualia do have structural relationships: they sort of borrow those of their objective counterparts. Quite how or why that should be is an interesting issue in itself, but at any rate it looks doubtful whether we can safely claim that qualia are monadic.
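To see why inversion is the qualophile’s mapping of choice, here is a toy illustration (my own gloss, not anything in Loorits’ paper, and the hue numbers are rough): treat colours as hue angles, and an inversion leaves every pairwise relation intact, whereas swapping in a single replacement colour does not.

```python
# Toy illustration (my own, not from Loorits): inversion preserves the
# structure of colour relations; an arbitrary one-off substitution doesn't.

def hue_distance(a, b):
    """Smallest angular difference between two hues, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

# Rough, hypothetical hue angles.
hues = {"red": 0, "green": 120, "blue": 240, "cerulean": 200}

inverted = {name: (h + 180) % 360 for name, h in hues.items()}  # swap every colour for its 'opposite'
swapped  = dict(hues, cerulean=330)                             # put in something like puce instead

for a, b in [("red", "green"), ("green", "blue"), ("blue", "cerulean")]:
    print(a, b,
          hue_distance(hues[a], hues[b]),          # original relation
          hue_distance(inverted[a], inverted[b]),  # unchanged under inversion
          hue_distance(swapped[a], swapped[b]))    # distorted wherever cerulean is involved
```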

Nevertheless, I think Loorits’ set-up is basically reasonable: in a way he is echoing the view that mental content lacks physical location and extension, an opinion that goes back to Descartes and was more recently presented in a slightly different form by McGinn.

For his actual theory he rests on the views of Crick and Koch, though he is not necessarily committed to them. The mysterious privacy of qualia, in his view, amounts to our having information about our mental states which we cannot communicate. When we see a red rose, the experience is constituted by the activity of a bunch of neurons. But in addition, a lot of other connected neurons raise their level of activity: not enough to pass the threshold for entering into consciousness, but enough to have some effect. It is this penumbra of subliminal neural activity that constitutes the inexpressible qualia. Since this activity is below the level of consciousness it cannot be reported and has no explicit causal effects on our behaviour; but it can affect our attitudes and emotions in less visible ways.
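Here is a crude sketch of how I read that picture (the numbers and the ‘mood’ variable are mine, not Loorits’ or Crick and Koch’s): activity above a threshold is reportable, while the sub-threshold penumbra still shifts downstream state without ever being available for report.

```python
# Crude threshold sketch of the 'penumbra' idea (my own toy model, not taken
# from Loorits or Crick and Koch): only supra-threshold activity is reportable,
# but subliminal activity still influences attitude and emotion.

THRESHOLD = 0.5

# Hypothetical activation levels of neuron groups when seeing a red rose.
activations = {
    "red_core":        0.9,   # crosses the threshold: enters consciousness
    "warmth_assoc":    0.3,   # subliminal penumbra
    "blood_assoc":     0.2,
    "valentine_assoc": 0.4,
}

reportable = {k: v for k, v in activations.items() if v >= THRESHOLD}
penumbra   = {k: v for k, v in activations.items() if v <  THRESHOLD}

# The penumbra cannot be reported, but it still nudges the system's state.
mood_shift = sum(penumbra.values()) * 0.1

print("reportable:", sorted(reportable))
print("inexpressible influence:", round(mood_shift, 2))
```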

It therefore turns out that qualia are indeed not monadic after all; they do have structure and relations, just not ones that are visible to us.

Interestingly, Loorits goes on to propose an empirical test. He mentions an example quoted by Dennett: a chord on the guitar sounds like a single thing, but when we hear the three notes played separately first, we become able to ‘hear’ them separately within the chord. On Loorits’ view, part of what happens here is that hearing the notes separately boosts some of the neuronal activity which was originally subliminal so that we become aware of it: when we go back to the chord we’re now aware of a little more information about why it sounds as it does, and the qualic mystery of the original chord is actually slightly diminished.

Couldn’t there be a future machine, asks Loorits, that elucidated qualia in this way but more effectively? Such a machine would scan our brain while we were looking at the rose and note the groups of neurons whose activity increased only to subliminal levels. Then it could directly stimulate each of these areas to tip them over the threshold into consciousness. For us the invisible experiences that made up our red quale would be played back into our consciousness, and when we had been through them we should finally understand why the red quale was what it was: we should know what seeing red was like and be able for the first time to describe it effectively.

Fascinating idea, but I can’t imagine what it would be like; and there’s the rub, perhaps. I think a true qualophile would say, yes, all very well, but once we’ve got your complete understanding of the red experience, there’s still going to be something over and above it all; the qualia will still somehow escape.

The truth is that Loorits’ theory is not really an explanation of qualia: it’s a sceptical explanation of why we think we have qualia. This becomes clear, if it wasn’t already, when he reviews the philosophical arguments: he doesn’t, for example, think philosophical zombies, people exactly like us but without qualia, are actually possible.

That is a perfectly respectable point of view, with a great deal to be said for it. If we are sceptics, Loorits’ theory provides an exceptionally clear and sensible underpinning for our disbelief; it might even turn out to be testable. But I don’t think it will end the argument.


The bots are back in town…

Botprize is a version of the Turing Test for in-game AIs: they don’t have to talk, just run around playing Unreal Tournament (a first-person shooter game) in a way that convinces other players that they are human. In the current version players use a gun to tag their opponents as bots or humans; the bots, of course, do the same.

The contest initially ran from 2008 to 2012; in the last year, two of the bots exceeded the 50% benchmark of humanness. The absence of a 2013 contest might have suggested that things had been wrapped up for good: but now the 2014 contest is under way, and it’s not too late to enter if you can get your bot sorted by 12 May. This time there will be two methods of judging: one, rather confusingly called ‘first person’ (it sounds as if participants will ask themselves: am I a bot?), is the usual in-game judging; the other (‘third person’) will be a ‘crowd-sourced’ judgement based on people viewing selected videos after the event.

How does such a contest compare with the original Turing Test, a version of which is run every year as the Loebner Prize? The removal of any need to talk seems to make the test easier. Judges cannot use questions to test the bots’ memory (at least not in any detail), general knowledge, or ability to carry the thread of a conversation and follow unpredictable linkages of the kind human beings are so good at. They cannot set traps for the bots by making quirky demands (‘please reverse the order of the letters in each word when you respond’) or looking for a sense of humour.

In practice a significant part of the challenge is simply making a bot that plays the game at an approximately human level. This means the bot must never get irretrievably stuck in a corner or attempt to walk through walls; but also, it must not be too good – not a perfect shot that never misses and is inhumanly quick on the draw, for example. This kind of thing is really no different from the challenges faced by every game designer, and indeed the original bots supplied with the game don’t perform all that badly as human imitators, though they’re not generally as convincing as the contestants.
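As a hedged illustration of the ‘not too good’ requirement (my own toy code, not drawn from any actual Botprize entry), a bot might deliberately inject human-plausible reaction delay and aim error before it is allowed to shoot:

```python
import random

# Toy 'humanising' layer (hypothetical figures, not from any real entry):
# add reaction delay and aim error so the bot isn't an inhumanly perfect shot.

HUMAN_REACTION_MEAN_S = 0.25
HUMAN_REACTION_SD_S   = 0.08
AIM_ERROR_SD_DEGREES  = 3.0

def humanised_shot(true_bearing_deg):
    """Return (delay in seconds, aimed bearing) for a deliberately imperfect shot."""
    delay   = max(0.1, random.gauss(HUMAN_REACTION_MEAN_S, HUMAN_REACTION_SD_S))
    bearing = true_bearing_deg + random.gauss(0, AIM_ERROR_SD_DEGREES)
    return delay, bearing

print(humanised_shot(90.0))   # e.g. (0.27, 92.1): slower and sloppier than a perfect bot
```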

The way to win is apparently to build in typical or even exaggerated human traits. One example is that when a human player is shot at, they tend to go after the player that attacked them, even when a cool appraisal of the circumstances suggests that they’d do better to let it go. It’s interesting to reflect that if humans reliably seek revenge in this way, that tendency probably had survival value in the real world when the human brain was evolving; there must be important respects in which the game theory of the real world diverges from that of the game.
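The revenge tendency is easy to caricature in code; something like the following sketch (again my own, and nothing to do with the real winners’ internals) would make a bot chase its most recent attacker even when a cooler choice of target is available:

```python
from typing import List, Optional

# Toy grudge-holding target selection (my own sketch, not any real entry).

class GrudgeBot:
    def __init__(self) -> None:
        self.last_attacker: Optional[str] = None

    def on_damaged(self, attacker: str) -> None:
        # Remember whoever shot us last.
        self.last_attacker = attacker

    def choose_target(self, visible: List[str], best_tactical: Optional[str]) -> Optional[str]:
        # Like a human, prefer revenge over the tactically sensible choice.
        if self.last_attacker in visible:
            return self.last_attacker
        return best_tactical

bot = GrudgeBot()
bot.on_damaged("PlayerA")
print(bot.choose_target(["PlayerA", "PlayerB"], best_tactical="PlayerB"))  # PlayerA
```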

Because Botprize is in some respects less demanding than the original Turing Test, the conviction it delivers is less; the 2012 wins did not really make us believe that the relevant bots had human thinking ability, still less that they were conscious. In that respect a proper conversation carries more weight. The best chat-bots in the Loebner, however, are not at all convincing either, partly for a different reason – we know that no attempt has been made to endow them with real understanding or real thought; they are just machines designed to pass the test by faking thoughtful responses.

Ironically some of the less successful Botprize entrants have been more ambitious. In particular, Neurobot, created by Zafeiros Fountas as an MSc project, used a spiking neural network with a Global Workspace architecture; while not remotely on the scale of a human brain, this is in outline a plausible design for human-style cognition; indeed, one of the best we’ve got (which may not be saying all that much, of course). The Global Workspace idea, originated by Bernard Baars, situates consciousness as a general purpose space where inputs from different modules can be brought together and handled effectively. Although I have my reservations about that concept, it could at least reasonably be claimed that Neurobot’s functional states were somewhere on a spectrum which ultimately includes proper consciousness (interestingly, they would presumably be cognitive states of a kind which have never existed in nature, far simpler than those of most animals yet in some respects more like states of a human brain).
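For what it’s worth, the Global Workspace idea can be sketched very crudely in a few lines (this is my own simplification of Baars’ model, and has nothing to do with Neurobot’s actual spiking-network implementation): specialist modules compete for access to a shared workspace, and whatever wins is broadcast back to all of them.

```python
from dataclasses import dataclass
from typing import List

# Bare-bones Global Workspace sketch (my simplification of Baars' idea;
# nothing here reflects Neurobot's real implementation).

@dataclass
class Percept:
    source: str
    content: str
    salience: float   # strength of the module's bid for the workspace

class Module:
    def __init__(self, name: str) -> None:
        self.name = name
    def receive(self, percept: Percept) -> None:
        print(f"{self.name} receives broadcast: {percept.content}")

class GlobalWorkspace:
    def __init__(self, modules: List[Module]) -> None:
        self.modules = modules
    def cycle(self, percepts: List[Percept]) -> Percept:
        winner = max(percepts, key=lambda p: p.salience)  # competition for access
        for m in self.modules:                            # broadcast to every module
            m.receive(winner)
        return winner

gw = GlobalWorkspace([Module("vision"), Module("navigation"), Module("combat")])
gw.cycle([Percept("vision", "enemy at 40 degrees", 0.9),
          Percept("hearing", "footsteps behind", 0.6)])
```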

The 2012 winners, by contrast, like the most successful Loebner chat-bots, relied on replaying recorded sequences of real human behaviour. Alas, this seems in practice to be the Achilles heel of Turing-style tests; canned responses just work too well.