Quentin Ruyant has written a thoughtful piece about quantum mechanics and philosophy of mind: in a nutshell he argues both that quantum theory may be relevant to the explanation of consciousness and that consciousness may be relevant to the interpretation of quantum theory.

Is quantum theory relevant to consciousness? Well, of course some people have said so, notably Sir Roger Penrose and Stuart Hameroff. I think Ruyant is right, though, that the majority of philosophers and probably the majority of physicists dismiss the idea that quantum theory might be needed to explain consciousness. People often suggest that the combination of the two only appeals because both are hard to explain: ‘here’s one mystery and here’s another: maybe one explains the other’. Besides, people say, the brain is far too big and hot and messy for anything other than classical physics to be required.

In making the case for the relevance of quantum theory, Ruyant relies on the Hard Problem.  His position is that the Hard Problem is not biological but a matter of physics, whereas the Easy Problem, to do with all the scientifically tractable aspects of consciousness, can be dealt with by biology or psychology.

Actually, turning aside from the main thread of Ruyant’s argument, there are some reasons to suggest that quantum physics is relevant to the Easy Problem. Penrose’s case, in fact, seems to suggest just that: in his view consciousness is demonstrably non-computable and some kind of novel quantum mechanics is his favoured candidate to fill the gap. Penrose’s examples, things like solving mathematical problems, look like ‘Easy’ Problem matters to me.

Although I don’t think anyone (including me) advocates the idea, it also seems possible to argue that the ‘spooky action at a distance’ associated with quantum entanglement might conceivably have something to tell us about intentionality and its remarkable power to address things that are remote and not directly connected with us.

Anyway, Ruyant is mainly concerned with the Hard Problem, and his argument is that metaphysics and physics are closely related. Topics like the essential nature of physical things straddle the borderline between the two subjects, and it is not at all implausible therefore that the deep physics of quantum mechanics might shed light on the deep metaphysics of phenomenal experience. It seems to me a weakish line of argument, possibly tinged with a bit of prejudice: some physicists are inclined to feel that while their subject deals with the great fundamentals, biology deals only with the chance details of life; sort of a more intellectual kind of butterfly collecting.  That kind of thinking is not really well founded, and it seems particularly odd to think that biology is irrelevant when considering a phenomenon that, so far as we know, appears only in animals and is definitely linked very strongly with the operation of the brain. John Searle for one argues that ‘Hard Problem’ consciousness arises from natural biological properties of brain tissue. We don’t yet know what those properties are, but in his view it’s absurd to think that the job of nerves could equally well be performed by beer cans and string. Ruth Millikan, somewhat differently, has argued that consciousness is purely biological in nature, arising from and defined by evolutionary needs.

I think the truth is that it’s difficult to get anywhere at this meta-theoretical level:  we don’t really decide what kind of theory is most likely to be right and then concentrate on that area; we decide what the true theory most likely is and then root for the kind of theory it happens to be. That, to a great extent, is why quantum theories are not very popular: no-one has come up with a particular one that is cogent and appealing.  It seems to me that Ruyant likes the idea of physics-based theories because he favours panpsychism, or panphenomenalism, and so is inclined to think that the essential nature of matter is likely to be the right place to look for a theory.

To be honest, though, I doubt whether any kind of science can touch the Hard Problem.  It’s about entities that have no causal properties and are ineffable: how could empirical science ever deal with that? It might well be that a scientist will eventually give us the answer, but if so it won’t be by doing science, because neither classical nor quantum physics can really touch the inexpressible.

Actually, though, there is a long shot. If Colin McGinn is partly on the right track, it may be that consciousness seems mysterious to us simply because we’re not looking at it the right way: our minds won’t conceptualise it correctly. Now the same could be true of quantum theory. We struggle with the interpretation of quantum mechanics, but what if we could reorient our brains so that it simply seemed natural, and we groped instead for an acceptable ‘interpretation’ of spooky classical physics? If we could make such a transformation in our mental orientation, then perhaps consciousness would make sense too? It’s possible, but we’re back to banging two mysteries together in the hope that some spark will be generated.

Ruyant’s general case, that metaphysicians should be informed by our best physics, is hard to argue with. At the moment few philosophers really engage with the physics and few physicists really grasp the philosophy. Why do philosophers avoid quantum physics? Partly, no doubt, just because it’s difficult, and relies on mathematics which few philosophers can handle. Partly also, I think, there’s an unspoken fear that in learning about quantum physics your intuitions will be trained into accepting a particular weltanschauung that might not be helpful. Connected with that is the fear that quantum physics isn’t really finished or definitive. Where would I be if I came up with a metaphysical system that perfectly supported quantum theory and then a few years later it turned out that I should have been thinking in terms of string theory? Metaphysicians cross their fingers and hope they can deal with the key issues at a level of generality that means they won’t be rudely contradicted by an unexpected advance in physics a few years later.

I suppose what we really need is someone who can come up with a really good specific theory that shows the value of metaphysics informed by physics, but few people are qualified to produce one. I must say that Ruyant seems to be an exception, with an excellent grasp of the theories on both sides of the divide. Perhaps he has a theory of consciousness in his back pocket…?

A few years ago we noted the remarkable research by Fried, Mukamel, and Kreiman which reproduced and confirmed Libet’s famous research. Libet, in brief, had found good evidence using EEG that a decision to move was formed about half a second before the subject in question became consciously aware of it; Fried et al produced comparable results by direct measurement of neuron firing.

In the intervening years, electrode technology has improved and should now make it possible to measure multiple sites. The scanty details here indicate that Kreiman, with support from MIT, plans to repeat the research in an enhanced form; in particular he proposes to see whether, having identified the formed intention to move, it is then possible to stop it before the action takes place. This resembles the faculty of ‘free won’t’ by which Libet himself hoped to preserve some trace of free will.
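
Just to make the shape of that idea concrete, here is a toy sketch (entirely my own illustration, not the actual experimental protocol, and with invented numbers) of how a detect-and-veto loop might look: a readiness-like signal ramps up before movement, and a monitor that spots the ramp early enough leaves a window in which a ‘veto’ could still be issued.

```python
import random

# Toy illustration (not the real protocol): a readiness-like signal builds
# up before movement; detecting the build-up early leaves a veto window.
# DETECT_THRESHOLD, MOVE_THRESHOLD and the ramp rate are all invented.

random.seed(1)

DETECT_THRESHOLD = 0.5   # hypothetical level at which intention is detectable
MOVE_THRESHOLD = 1.0     # hypothetical level at which movement occurs
DT_MS = 10               # simulation step in milliseconds

signal, t = 0.0, 0
detected_at = None
while signal < MOVE_THRESHOLD:
    signal += 0.01 + random.gauss(0, 0.005)  # slow, noisy ramp-up
    t += DT_MS
    if detected_at is None and signal >= DETECT_THRESHOLD:
        detected_at = t

print(f"intention detectable at {detected_at} ms, movement at {t} ms")
print(f"veto window: about {t - detected_at} ms")
```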

From the MIT article it is evident that Kreiman is a determinist and believes that his research confirms that position. It is generally believed that Libet’s findings are incompatible with free will in the sense that they seem to show that consciousness has no effect on our actual behaviour.

That actually sheds an interesting side-light on our view of what free will is. A decision to move still gets made, after all; why shouldn’t it be freely made even though it is unconscious? There’s something unsatisfactory about unconscious free will, it seems. Our desire for free will is a desire to be in control, and by that we mean a desire for the entity that does the talking to be in control. We don’t really think of the unconscious parts of our mind as being us; or at least not in the same way as that gabby part that claims responsibility for everything (the part of me that is writing this now, for example).

This is a bit odd, because the verbal part of our brain obviously does the verbals; it’s strange and unrealistic to think it should also make the decisions, isn’t it? Actually if we are careful to distinguish between the making of the decision and being aware of the decision – which we should certainly do, given that one is clearly a first order mental event and the other equally clearly second order – then it ceases to be surprising that the latter should lag behind the former a bit. Something has to have happened before we can be aware of it, after all.

Our unease about this perhaps relates to the intuitive conviction of our own unity. We want the decision and the awareness to be a single event, we want conscious acts to be, as it were, self-illuminating, and it seems to be exactly that which the research ultimately denies us.

It is the case, of course, that the decisions made in the research are rather weird ones. We’re not often faced with the task of deciding to move our hands at an arbitrary time for no reason. Perhaps the process is different if we are deciding which stocks and shares to buy? We may think about the pros and cons explicitly, and we can see the process by which the conclusion is reached; it’s not plausible that those decisions are made unconsciously and then simply notified to consciousness, is it?

On the other hand, we don’t think, do we, that the process of share-picking is purely verbal? The words flowing through our consciousness are signals of a deeper imaginative modelling, aren’t they? If that is the case, then the words might still be lagging. Perhaps the distinction to be drawn is not really between conscious and unconscious, but between simply conscious and explicitly conscious. Perhaps we just shouldn’t let the talky bit pretend to be the whole of consciousness just because the rest is silent.

The European Human Brain Project seems to be running into problems. This Guardian report notes that an open letter of protest has been published by 170 unhappy neuroscientists. They are seeking to influence and extend a review that is due, hoping they can get a change of direction. I don’t know a great deal about the relevant EU bureaucracy, but I should think the letter-writers’ chances of success are small, not least because in Henry Markram they’re up against a project leader who is determined, resourceful, and not lacking support of his own. There’s a response to the letter here.

It is a little hard to work out exactly what the disagreement is about; the Guardian seems to smoosh together the current objections of former insiders with the criticisms of those who thought the project was radically premature in the first place. I find myself trying to work out what the protestors want, from Markram’s disparaging remarks about them, rather the way we have to reconstruct some ancient heresies from the rebuttals of the authorities, the only place where details survive.

We’re told the disagreement is between those who study behaviour at a high level and the project leaders who want to build simulations from the bottom up. In particular some cognitive neuroscience projects have been ‘demoted’ to partner status. People say the project has been turned into a technology one: Markram says it always was:  he suggests that piling up more data is useless and that instead he’s doing an ICT project which will provide a platform for integrating the data, and that it’s all coming out of an ICT budget anyway.

We naive outsiders had picked up the impression that the project had a single clear goal: a working simulation of a whole human brain. That is sort of still there, but reading the response it seems to be a pretty distant aspiration. Apparently a mouse brain is going to be done first, but even that is a way off; it’s all about the platforms. Earlier documents suggest there will actually be six platforms, only one of which is about brain simulation; the others are neuroinformatics, high performance computing, medical informatics, neuromorphic computing, and neurorobotics – fascinating subjects. The implicit suggestion is that this kind of science can’t be done properly just by working in labs and publishing papers, it requires advanced platforms in which research can be integrated. Really? Speaking as a professional bureaucrat myself, I have to say frankly that that sounds uncommonly like the high-grade bollocks emitted by a project leader who has more money than he knows what to do with. The EU in particular is all about establishing unwanted frameworks and common platforms which lie dead in drawers forever after. If people want to share findings, publishing papers is fine (alright, not flawless). If it’s about doing actual research, having all the projects captured by a common platform which might embody common errors and common weaknesses doesn’t sound like a good idea at all. My brain doesn’t know, but my gut says the platforms won’t be much use.

Let’s be honest, I don’t really know what’s going on, but if one were cynical one might suppose that the success of the Human Genome Project made the authorities open to other grand projects, and one on the brain hit the spot. The problem is that we knew what a map of the genome would be like, and we pretty much knew it could be done and how. We don’t have a similarly clear idea relating to the brain. However, the concept was appealing enough to attract a big pot of money, both in the EU and then in the US (an even bigger pot). The people who got control of these pots cannot deliver anything like the map of the human genome, but they can buy in the support of fund-hungry researchers by disbursing some of the gold while keeping the politicians and bureaucrats happy by wrapping everything in the afore-mentioned bollocks. The authors of the protest letter perhaps ought to be criticising the whole idea, but really they’re just upset about being left out. The deeper sceptics who always said the project was premature – though they may have thought they were talking about brain simulation, not a set of integrative platforms – were probably right; but there’s no money in that.

Grand projects like this are probably rarely the best way to control research funding, but they do get funding. Maybe something good somewhere will accidentally get the help it needs; meanwhile we’ll be getting some really great European platforms.

Doctors at George Washington found by chance recently that stimulating a patient’s claustrum served to disrupt consciousness temporarily (abstract). The patient was being treated for epilepsy, and during this kind of surgery it is normal to use an electrode to stimulate sites around the target area before operating, to determine their role and help ensure the least possible damage is done to important functions. The claustrum is a sheet-like structure which seems to be well connected to many parts of the brain; Crick and Koch suggested it might be ‘the conductor of the orchestra’ of consciousness.

New Scientist reported this as the discovery of the ‘on/off’ switch for consciousness; but that really doesn’t seem to be the claustrum’s function: there’s no reason at the moment to suppose it is involved in falling asleep, or anaesthesia, or other kinds of unconsciousness. The on/off idea seems more like a relatively desperate attempt to explain the discovery in layman’s terms, reminiscent of the all-purpose generic tabloid newspaper technology report in Michael Frayn’s The Tin Men:

British scientists have developed a “magic box”, it was learned last night. The new wonder device was tested behind locked doors after years of research. Results were said to have exceeded expectations… …The device is switched on and off with a switch which works on the same principle as an ordinary domestic light switch…

Actually, one of the most interesting things about the finding is that the state the patient entered did not resemble sleep or any of those other states; she did not collapse or close her eyes, but instantly stopped reading and became unresponsive – although if she had been asked to perform a repetitive task before stimulation started, she would continue for a few seconds before tailing off. On some occasions she uttered a few incoherent syllables unprompted. This does sound more novel and potentially more interesting than a mere on/off switch. She was unable to report what the experience was like as she had no memory of it afterwards – that squares with the idea that consciousness was entirely absent during stimulation, though it’s fair to note that part of her hippocampus, which has an important role in memory formation, had already been removed.

Could Crick and Koch now be vindicated? It seems likely in part: the claustrum seems at least to have some important role – but it’s not absolutely clear that it is a co-ordinating one. One of the long-running problems for consciousness has been the binding problem: how the different sensory inputs, processed and delivered at different speeds, somehow come together into a smoothly co-ordinated experience. It could be that the claustrum helps with this, though some further explanation would be needed. As a long shot, it might even be that the claustrum is part of the ‘Global Workspace’ of the mind hypothesised by Bernard Baars, an idea that is still regularly invoked and quoted.

But we must be cautious. All we really know is that stimulating the claustrum disrupted consciousness. That does not mean consciousness happens in the claustrum. If you blow up a major road junction near a car factory, production may cease, but it doesn’t mean that the junction was where the cars were manufactured. Looking at it sceptically we might note that since the claustrum is well connected it might provide an effective way of zapping several important areas at once, and it might be the function of one or more of these other areas that is essential to sustaining consciousness.

However, it is surely noteworthy that a new way of being unconscious should have been discovered. It seems an unprecedentedly pure way, with a very narrow focus on high level activity, and that does suggest that we’re close to key functions. It is ethically impossible to put electrodes in anyone’s claustrum for mere research reasons, so the study cannot be directly replicated or followed up; but perhaps the advance of technology will provide another way.

Smell is the most elusive of the senses. Sight is beautifully structured and amenable to analysis in terms of consistent geometry and a coherent domain of colours. Smells… how does one smell relate to another? There just seems to be an infinite number of smells, all one of a kind. We can be completely surprised by an unprecedented smell which is like nothing we ever experienced before, in a way we can’t possibly be surprised by a new colour (with some minor possible exceptions). Our olfactory system effortlessly assigns new unique smell experiences to substances that never existed until human beings synthesised them.

There don’t even seem to be any words for smells: or at least, the only way we can talk about them is by referring to “the smell of X”, as in a “smoky smell” or “the smell of lemons”. We don’t have to do that to describe shapes or colours: they can be described as “blue”, or “square” without our having to say they are “sky-coloured” or “the shape of a box”. (Except perhaps in the case of orange? Is “orange” short for “the colour of oranges”?) Even for taste we have words like “bitter” and “sweet”. The only one I can think of for smells is “nidorous”, which is pretty obscure – and in order to explain it I have to fall back on saying it describes the “smell of” burning/cooking meat. All we have to describe smells is “strong” and “faint” (my daughter, reading over my shoulder, says what about “pungent”? She does not consider “pungent” to be merely a synonym of “strong” – you may be indifferent to a strong smell, but not to a pungent one, she claims).

With that by way of preamble, let me introduce the interesting question considered here by William Lycan: does smell represent? When we smell, do we smell something? There is a range of possible answers. We might say that when I smell, I smell sausages (for example). Or that I smell a smell (which happens to be the smell of sausages). Or I might say I just have a smell experience: I may know that it’s associated with sausage smells and hence with sausages, but in itself it’s just an experience.

Lycan (who believes that we smell a gaseous miasma) notes two arguments for something like the last position – that smell doesn’t represent anything. First, introspection tells us nothing about what a smell represents. If I were a member of a culture that did not make sausages or eat meat, and had never experienced them, my first nose-full of sausage odour would convey nothing to me beyond itself. It’s different for sight: we inherently see things, and when we see our first sausage there can be no doubt we are seeing a thing, even if we do not yet know much about its nature: it would be absurd to maintain we were merely having a visual experience.

The second argument is that smells can’t really be wrong: there are no smell illusions. If a car is sprayed with “new car” perfume to make us think that it is fresh off the production line, we may make a mistake about that inference, but our nose was not wrong about the smell, which was real. But representations can always be wrong, so if we can’t be wrong, there is no representation.

Lycan is unimpressed by introspective evidence: the mere fact that philosophers disagree about what it tells us is enough, he feels, to discredit it. The second argument fails because it assumes that if smells represent, they must represent their causes: but they might just represent something in the air. On getting a whiff of my first sausage I would not know what it was, but I might well be moved to say “What’s that appetising (or disgusting) smell?”  I wouldn’t simply say “Golly, I am undergoing a novel olfactory experience for some opaque reason.”  I think in fact we could go further there and argue that I might well say “What’s that I can smell?” – but that doesn’t suit Lycan’s preferred position. (My daughter intervenes to say “What about ‘acrid’?”)

Lycan summarises a range of arguments (One is an argument by Richardson that smell is phenomenologically “exteroceptive”, inherently about things out there: Lycan endorses this view, but surely relying on phenomenology is smuggling back in the introspection he was so scathing about when the other side invoked it?). His own main argument rests on the view that how something smells is something over and above all the other facts about it. The premise here is very like that in the famous thought experiment of Mary the colour scientist, though Lycan is not drawing the same conclusions at all. He claims instead that:

I can know the complex of osphresiological fact without knowing how the rose smells because knowing is knowing-under-a-representation… that solution entails that olfactory experience involves representation.

That does make some sense, I feel (What about “osphresiological”! we’re really working on the vocabulary today, aren’t we?). You may be asking yourself, however, whether this is a question that needs a single answer. Couldn’t we say, yes sometimes smells represent miasmas, but they can also represent sausages; or indeed they can represent nothing.

Lycan, in what I take to be a development of his view, is receptive to the idea of layering: that in fact smells can represent not just a cloud of stuff in the air, but also the thing from which they emanated. That being so I am not completely clear why we should give primacy to the miasma. Two contrary cases suggest themselves. First, suppose there is an odour so faint I don’t even perceive it as such consciously, but have a misty sense of salsiccian (alright, I made it up) presence which makes me begin to think about how agreeable a nice Cumberland sausage for lunch might be. Wouldn’t we say that in some sense the smell represented sausages to me: but we can’t say it represented a miasma because no such thing ever entered my mind?

Second, if we accept layering we might say that the key point is about the essential or the minimal case: we can smell without that smell representing a sausage, but what’s the least it can represent and still be a smell? Can it represent nothing? Suppose I dream and have an odd, unrecognisable experience. Later on, when awake, I encounter a Thai curd sausage for the first time and find that the experience I had was in fact an olfactory one, the smell of this particular kind of comestible. My dream experience cannot possibly have represented a sausage, a miasma, a smell, or anything but itself because I didn’t know what it was: but, it turns out, it was the smell of curd sausage.

I think your reaction to that is likely to depend on whether you think an experience could be a smell experience without being recognisable as such; if not, you may be inclined to agree with Lycan, who would probably reiterate his view that smells are sensing-under-a-representation. That view entails that there is an ineffability about smell, and Lycan suggests this might help account for the poverty of smell vocabulary that I noted above. Interestingly it turns out that this very point has been attacked by Majid and Burenhult, albeit not in a way that Lycan considers fatal to his case. Majid and Burenhult studied the Jahai, a nomadic hunter-gatherer tribe on the Malaysian peninsula, and found that they have a very rich lexicon of odour terms, such as a word for “the smell of petrol, smoke and bat droppings” (what, all of them?). It’s just us English speakers, it seems, who are stuck with acrid nidors.

So, was the brouhaha over the Turing Test justified? It was widely reported last week that the test had been passed for the first time by a chatbot named ‘Eugene Goostman’. I think the name itself is a little joke: it sort of means ‘well-made ghost man’.

This particular version of the test was not the regular Loebner which we have discussed before (Hugh Loebner must be grinding his teeth in frustration at the apparent ease with which Warwick garnered international media attention), but a session at the Royal Society apparently organised by Kevin Warwick. The bar was set unusually low in this case: to succeed the chatbot only had to convince 30% of the judges that it was human. This was based on the key sentence in the paper by Turing which started the whole thing:

I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.

Fair enough, perhaps, but less impressive than the 50% demanded by other versions; and if 30% is the benchmark, this wasn’t actually the first ever pass, because other chatbots like Cleverbot have scored higher in the past. Goostman doesn’t seem to be all that special: in an “interview” on BBC radio, with only the softest of questioning, it used one response twice over, word for word.

The softness of the questioning does seem to be a crucial variable in Turing tests. If the judges stick to standard small talk and allow the chatbot to take the initiative, quite a reasonable dialogue may result: if the judges are tough it is easy to force the success rate down to zero by various tricks and traps.  Iph u spel orl ur werds rong, fr egsampl, uh bott kanot kope, but a human generally manages fine.

Perhaps that wouldn’t have worked for Goostman, though, because he was presented as a relatively ignorant young boy whose first language was not English, giving him some excuse for not understanding things. This stratagem attracted some criticism, but really it is of a piece with chatbot strategy in general; faking and gaming is what it’s all about. No-one remotely supposes that Goostman, or Cleverbot, or any of the others, has actually attained consciousness, or is doing anything that could properly be called thinking. Many years ago I believe there were serious efforts to write programs that to some degree imitated the probable mental processes of a human being: they identified a topic, accessed a database of information about it, retained a set of ‘attitudes’ towards things and tried to construct utterances that made sense in relation to them. It is a weakness of the Turing test that it does not reward that kind of effort; a robot with poor grammar and general knowledge might be readily detectable even though it gave signs of some nascent understanding, while a bot which generates smooth responses without any attempt at understanding has a much better chance of passing.

So perhaps the curtain should be drawn down on the test; not because it has been passed, but because it’s no use.

Gerald Edelman has died, at the age of 84. He won his Nobel prize for work on the immune system, but we’ll remember him as the author of the Theory of Neuronal Group Selection (TNGS) or ‘Neural Darwinism’.

Edelman was prominent among those who emphasise the limits of computation: he denied that the brain was a computer and did not believe computers could ever become conscious…

In considering the brain as a Turing machine, we must confront the unsettling observations that, for a brain, the proposed table of states and state transitions is unknown, the symbols on the input tape are ambiguous and have no preassigned meanings, and the transition rules, whatever they may be, are not consistently applied. Moreover inputs and outputs are not specified by a teacher or a programmer in real-world animals. It would appear that little or nothing of value can be gained from the application of this failed analogy between the computer and the brain.

He was not averse to machines in general, however, and was happy to use robots for parts of his own research. He drew a distinction between perception, first-order consciousness, and higher-order consciousness; the first could be attained by machines we could build now; the second might very well be possible for machines of the right kind eventually – but there was much to be done before we could think of trying it. Even higher-order consciousness might be attainable by an artefactual machine in principle, but the prospect was so remote it was pointless to spend any time thinking about it.

There may seem to be a slight tension here: Turing machines are ruled out, but machines of another kind are ruled in. Yet isn’t the whole point of a Universal Turing Machine that it can do anything that any other machine can do?

For Edelman the point was that the brain required biological thinking, not just concepts from physics or engineering. In particular he advocated selective mechanisms like those in Darwinian evolution. Instead of running an algorithm, the brain offered up a vast range of neuronal arrays, some of which were reinforced and so survived to exert more influence subsequently. The analogy with Darwinian evolution is not precise, and Francis Crick famously said the whole thing could better be called ‘Neural Edelmanism’ (no-one so bitchy as a couple of Nobel prize-winners).

Edelman was in fact drawing on a different analogy, one with the immune system he understood so well. The human immune system has to react quickly to invading infections, synthesising antibodies to new molecules it has never encountered before; in fact it reacts just as effectively to artificial molecules synthesised in the lab, ones that never existed in nature. For a long time it was believed that the system somehow took an impression of the invaders’ chemistry and reproduced it; in fact what it does is develop a vast repertoire of variant molecules; when one of them happens to lock into an invader it then reproduces vigorously and produces more of itself to lock into other similar molecules.
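
The selectionist idea common to both analogies is easy to caricature in code. Here is a toy sketch (entirely my own, with invented numbers, and not a model of anything Edelman actually built): generate a varied repertoire blindly, amplify whatever happens to match the input, and let the amplified variants dominate subsequent responses.

```python
import random

# Toy selectionist loop (my illustration, not Edelman's model): no variant
# is designed for the target; selection alone makes the matchers dominate.

random.seed(0)
TARGET = 0.73  # stands in for a novel "invader" or stimulus pattern

# A repertoire of random variants, each with a strength (copy number).
repertoire = [{"shape": random.random(), "strength": 1.0} for _ in range(1000)]

for generation in range(20):
    for v in repertoire:
        if abs(v["shape"] - TARGET) < 0.05:   # happens to "lock into" the target
            v["strength"] *= 1.5              # differential amplification

best = max(repertoire, key=lambda v: v["strength"])
print(f"dominant variant shape={best['shape']:.3f}, strength={best['strength']:.1f}")
```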

This looks like a useful concept and I think Edelman was right to think it has a role to play in the brain: but working out quite how is another matter. Edelman himself built a novel idea of recategorisation based on the action of re-entrant loops; this part of the theory has not fared very well over the years. The NYT obituary quotes Gunther Stent who once said that as professor of molecular biology and chairman of the neurobiology section of the National Academy of Sciences, he should have understood Edelman’s theory – but didn’t.

At any rate, we can see that Edelman believed that when a conscious machine was built in the distant future it would be running a selective system of some kind; one that we could well call a machine in everyday terms, though not in the Turing sense. He just might be vindicated one day.

 

Joseph T Hallinan’s new book Kidding Ourselves says that not only is self-deception more common and more powerful than we suppose, it’s actually helpful: deluded egoists beat realists every time.

Philosophically, of course, self-deception is impossible. To deceive yourself you have to induce in yourself a belief in a proposition you know to be false. In other words you have to believe and disbelieve the same thing, which is contradictory. In practice self-deception of a looser kind is possible if there is some kind of separation between the deceiving and deceived self. So for example there might be separation over time: we set up a belief for ourselves which is based on certain conditions; later on we retain the belief but have forgotten the conditions that applied. Or the separation might be between conscious and unconscious, with our unrecognised biases and preferences causing us to believe things which we could not accept if we were to subject them to a full and rational examination. As another example, we might well call it self deception if we choose to behave as if we believed something which in fact we don’t believe.

Hallinan’s examples are a bit of a mixed bag, and many of them seem to be simple delusions rather than self-delusions. He recounts, for example, the strange incident in 1944 when many of the citizens of a small town in Illinois began to believe they were being attacked by a man using gas – one who most probably never existed at all. It’s a peculiar case that certainly tells us something about human suggestibility, but apparently nothing about self-deception; there’s no reason to think these people knew all along that the gas man was a figment of their imaginations.

More trickily, he also tells a strange story about Stephen Jay Gould. A nineteenth-century researcher called Morton claimed he had found differences in cranial capacity between ethnic groups. Gould looked again at the data and concluded that the results had been fudged: but he felt it was clear they had not been deliberately fudged. Morton had allowed his own prejudices to influence his interpretation of the data. So far so good; the strange sequel is that after Gould’s death a more searching examination, which re-examined the original skulls Morton measured, found that there was nothing much wrong with his data. If anything, the new researchers concluded, it was Gould who had allowed prior expectations to colour his interpretation. A strange episode, but at the end of the day it’s not completely clear to me that anyone deceived themselves. Gould, or so it seems, got it wrong, but was it really because of his prejudices, or was that just a little twist the new researchers couldn’t resist throwing in?

Hallinan examines the well-established phenomenon of the placebo, a medicine which has no direct clinical effect but makes people better by the power of suggestion. He traces it back to Mesmer and beyond. Now of course people taking pink medicine don’t usually deceive themselves – they normally believe it is real medicine – otherwise it wouldn’t work? The really curious thing is that even in trials where patients were told they were getting a placebo, it still had a significant beneficial effect! What was the state of mind of these people? They did not believe it was real medicine, so they should not have believed it worked. But they knew that placebos worked, so they believed that if they believed in it it would have an effect; and somehow they performed the mental gymnastics needed to achieve some state of belief…?

Hallinan’s main point, though, is the claim that unjustified optimism actually leads to better health and greater success; in sports, in business, wherever. In particular, people who blame themselves for failure do less well than those who blame factors outside their control. He quotes many studies, but there are in my view some issues about untangling the causality. It seems possible that in a lot of cases there were underlying causal factors which explain the correlation of doubt and failure.

Take insurance salesmen: apparently those who were most optimistic and self-exculpatory in their reasoning not only sold more, they were less likely to give up. But let’s consider two imaginary salesmen. One looks and sounds like George Clooney. He goes down a storm on the doorstep and even when he doesn’t make a sale he gets friendly, encouraging reactions. Of course he’s optimistic, and of course he’s successful, but his optimism and his success are caused by his charm, they do not cause each other. His colleague Scarface has a problem on one cheek that drags his eye down and mouth up, giving him an odd expression and slightly distorting his speech. On the doorstep people just don’t react so well; unfairly they feel uneasy with him and want to curtail the conversation. Scarface is pessimistic and does badly, but it’s not his pessimism that is the underlying problem.
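
The point about confounding is easy to demonstrate. Here is a toy simulation (my own, with invented numbers): a hidden ‘charm’ variable drives both optimism and sales, so the two correlate strongly even though neither causes the other.

```python
import random

# Toy simulation of the confounding point above: "charm" independently raises
# both optimism and sales, producing a correlation with no causal arrow
# between them. All numbers are invented for illustration.

random.seed(42)
charm = [random.gauss(0, 1) for _ in range(10_000)]
optimism = [c + random.gauss(0, 0.5) for c in charm]   # caused by charm
sales = [c + random.gauss(0, 0.5) for c in charm]      # also caused by charm

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(f"correlation(optimism, sales) = {corr(optimism, sales):.2f}")
# Strongly positive, despite optimism and sales never touching each other.
```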

Hallinan includes sensible disclaimers about his conclusions – he’s not proposing we all start trying to delude ourselves – but I fear his thesis might play into a widespread tendency to believe that failure and ill-health are the result of a lack of determination and hence in some sense the sufferer’s own fault: it would in my view be a shame to reinforce that bias.

There are of course deeper issues here; some would argue that our misreading of ourselves goes far beyond over-rating our sales skills: that systematic misreading of limited data is what causes us to think we have a conscious self in the first place…

Kristjan Loorits says he has a solution to the Hard Problem, and it’s all about structure.

His framing of the problem is that it’s about the incompatibility of three plausible theses:

  1. all the objects of physics and other natural sciences can be fully analyzed in terms of structure and relations, or simply, in structural terms.
  2. consciousness is (or has) something over and above its structure and relations.
  3. the existence and nature of consciousness can be explained in terms of natural sciences.

At first sight it may look a bit odd to make structure so central. In effect Loorits claims that the distinguishing character of entities within science is structure, while qualia are monadic – single, unanalysable, unconnected. He says that he cannot think of anything within physics that lacks structure in this way – and if anyone could come up with such a thing it would surely be regarded as another item in the peculiar world of qualia rather than something within ordinary physics.

Loorits’ approach has the merit of keeping things at the most general level possible, so that it works for any future perfected science as well as the unfinished version we know at the moment. I’m not sure he is right to see qualia as necessarily monadic, though. One of the best-known arguments for the existence of qualia is the inverted spectrum. If all the colours were swapped for their opposites within one person’s brain – green for red, and so on – how could we ever tell? The swappee would still refer to the sky as blue, in spite of experiencing what the rest of us would call orange. Yet we cannot – can we? – say that there is no difference between the experience of blue and the experience of orange.

Now when people make that argument, going right back to Locke, they normally choose inversion because that preserves all the relationships between colours. Adding or subtracting colours produces results which are inverted for the swappee, but consistently. There is a feeling that the argument would not work if we merely took out cerulean from the spectrum and put in puce instead, because then the spectrum would look odd to the swappee. We most certainly could not remove the quale of green and replace it with the quale of cherry flavour or the quale of distant trumpets; such substitutions would be obvious and worrying (or so people seem to think). If that’s all true then it seems qualia do have structural relationships: they sort of borrow those of their objective counterparts. Quite how or why that should be is an interesting issue in itself, but at any rate it looks doubtful whether we can safely claim that qualia are monadic.
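
The structural point can be made concrete with a toy check (my own illustration, treating colours crudely as points on a 360-degree hue wheel): inversion preserves every pairwise relation, while swapping out a single colour does not.

```python
# Toy check of the structure-preservation point above (my illustration):
# hues as points on a 360-degree colour wheel.

def circ_dist(a, b):
    """Distance between two hues around the wheel."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

hues = {"red": 0, "yellow": 60, "green": 120, "blue": 240}

invert = lambda h: (h + 180) % 360           # full spectrum inversion
swap_one = lambda h: 300 if h == 120 else h  # replace green with (say) puce

for name, f in [("inversion", invert), ("single swap", swap_one)]:
    preserved = all(
        circ_dist(hues[a], hues[b]) == circ_dist(f(hues[a]), f(hues[b]))
        for a in hues for b in hues
    )
    print(name, "preserves all pairwise relations:", preserved)
# inversion -> True; single swap -> False (the spectrum would "look odd")
```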

Nevertheless, I think Loorits’ set-up is basically reasonable: in a way he is echoing the view that mental content lacks physical location and extension, an opinion that goes back to Descartes and was more recently presented in a slightly different form by McGinn.

For his actual theory he rests on the views of Crick and Koch, though he is not necessarily committed to them. The mysterious privacy of qualia, in his view, amounts to our having information about our mental states which we cannot communicate. When we see a red rose, the experience is constituted by the activity of a bunch of neurons. But in addition, a lot of other connected neurons raise their level of activity: not enough to pass the threshold for entering into consciousness, but enough to have some effect. It is this penumbra of subliminal neural activity that constitutes the inexpressible qualia. Since this activity is below the level of consciousness it cannot be reported and has no explicit causal effects on our behaviour; but it can affect our attitudes and emotions in less visible ways.

It therefore turns out that qualia are indeed not monadic after all; they do have structure and relations, just not ones that are visible to us.

Interestingly, Loorits goes on to propose an empirical test. He mentions an example quoted by Dennett: a chord on the guitar sounds like a single thing, but when we hear the three notes played separately first, we become able to ‘hear’ them separately within the chord. On Loorits’ view, part of what happens here is that hearing the notes separately boosts some of the neuronal activity which was originally subliminal so that we become aware of it: when we go back to the chord we’re now aware of a little more information about why it sounds as it does, and the qualic mystery of the original chord is actually slightly diminished.

Couldn’t there be a future machine that elucidated qualia in this way but more effectively, asks Loorits?  Such a machine would scan our brain while we were looking at the rose and note the groups of neurons whose activity increased only to subliminal levels. Then it could directly stimulate each of these areas to tip them over the limit into consciousness. For us the invisible experiences that made up our red quale would be played back into our consciousness, and when we had been through them we should finally understand why the red quale was what it was: we should know what seeing red was like and be able for the first time to describe it effectively.
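
For what it’s worth, the thresholded picture behind this thought experiment can be caricatured in a few lines (my own toy rendering, with invented activation levels, not Loorits’ model):

```python
# Toy rendering of the thresholded picture sketched above (my illustration):
# hypothetical activation levels for neuronal groups responding to a red
# rose; only supra-threshold activity is reportable.

THRESHOLD = 0.6

activations = {
    "red-detectors": 0.9,        # conscious core of the experience
    "blood-associations": 0.4,   # subliminal penumbra
    "warmth-associations": 0.5,
    "warning-associations": 0.3,
}

reportable = {k for k, v in activations.items() if v >= THRESHOLD}
penumbra = set(activations) - reportable
print("reportable:", reportable)
print("subliminal penumbra (the 'ineffable' part):", penumbra)

# The imagined machine, crudely: tip each penumbral group over the threshold
# so its contribution can be experienced and reported in turn.
for group in penumbra:
    activations[group] = THRESHOLD + 0.1
print("after boosting, reportable:",
      {k for k, v in activations.items() if v >= THRESHOLD})
```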

Fascinating idea, but I can’t imagine what it would be like; and there’s the rub, perhaps. I think a true qualophile would say, yes, all very well, but once we’ve got your complete understanding of the red experience, there’s still going to be something over and above it all; the qualia will still somehow escape.

The truth is that Loorits’ theory is not really an explanation of qualia: it’s a sceptical explanation of why we think we have qualia. This becomes clear, if it wasn’t already, when he reviews the philosophical arguments: he doesn’t, for example, think philosophical zombies, people exactly like us but without qualia, are actually possible.

That is a perfectly respectable point of view, with a great deal to be said for it. If we are sceptics,  Loorits’ theory provides an exceptionally clear and sensible underpinning for our disbelief; it might even turn out to be testable. But I don’t think it will end the argument.

 

Botprize is a version of the Turing Test for in-game AIs: they don’t have to talk, just run around playing Unreal Tournament (a first-person shooter game) in a way that convinces other players that they are human. In the current version players use a gun to tag their opponents as bots or humans; the bots, of course, do the same.

The contest initially ran from 2008 up to 2012; in the last year, two of the bots exceeded the 50% benchmark of humanness. The absence of a 2013 contest might have suggested that the 2012 result had wrapped things up for good: but now the 2014 contest is under way: it’s not too late to enter if you can get your bot sorted by 12 May. This time there will be two methods of judging; one called ‘first person’ (rather confusingly – that sounds as if participants will ask themselves: am I a bot?) is the usual in-game judging; the other (third person) will be a ‘crowd-sourced’ judgement based on people viewing selected videos after the event.

How does such a contest compare with the original Turing Test, a version of which is run every year as the Loebner Prize? The removal of any need to talk seems to make the test easier. Judges cannot use questions to test the bots’ memory (at least not in any detail), general knowledge, or ability to carry the thread of a conversation and follow unpredictable linkages of the kind human beings are so good at. They cannot set traps for the bots by making quirky demands (‘please reverse the order of the letters in each word when you respond’) or looking for a sense of humour.

In practice a significant part of the challenge is simply making a bot that plays the game at an approximately human level. This means the bot must never get irretrievably stuck in a corner or attempt to walk through walls; but also, it must not be too good – not a perfect shot that never misses and is inhumanly quick on the draw, for example. This kind of thing is really not different from the challenges faced by every game designer, and indeed the original bots supplied with the game don’t perform all that badly as human imitators, though they’re not generally as convincing as the contestants.

The way to win is apparently to build in typical or even exaggerated human traits. One example is that when a human player is shot at, they tend to go after the player that attacked them, even when a cool appraisal of the circumstances suggests that they’d do better to let it go. It’s interesting to reflect that if humans reliably seek revenge in this way, that tendency probably had survival value in the real world when the human brain was evolving; there must be important respects in which the game theory of the real world diverges from that of the game.
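
That kind of humanisation is straightforward to express in code. Here is a sketch of the sort of tweaks described (the names and numbers are invented, not taken from any actual Botprize entry): imperfect aim, a human-scale reaction delay, and a bias towards chasing the last attacker.

```python
import random

# Sketch of the humanising tweaks described above; all names and numbers
# are invented, not taken from any actual Botprize entry.

REACTION_DELAY_S = 0.25      # pause before responding to a new threat
AIM_ERROR_STD_DEG = 4.0      # humans miss, especially in a hurry
REVENGE_BIAS = 3.0           # weight boost for whoever shot us last

def aim_at(target_bearing_deg):
    """Return a noisy bearing instead of a perfect instant lock-on."""
    return target_bearing_deg + random.gauss(0, AIM_ERROR_STD_DEG)

def pick_target(enemies, last_attacker=None):
    """Prefer nearer enemies, but overweight the one that shot us last."""
    def score(e):
        base = 1.0 / (1.0 + e["distance"])
        return base * (REVENGE_BIAS if e["name"] == last_attacker else 1.0)
    return max(enemies, key=score)

enemies = [{"name": "alpha", "distance": 10, "bearing": 80},
           {"name": "beta", "distance": 20, "bearing": 200}]
target = pick_target(enemies, last_attacker="beta")
print("after a", REACTION_DELAY_S, "s pause, chasing:", target["name"])
print("firing at bearing", round(aim_at(target["bearing"]), 1))
# 'beta' wins despite being further away: revenge beats the cooler choice.
```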

Because Botprize is in some respects less demanding than the original Turing Test, the conviction it delivers is less; the 2012 wins did not really make us believe that the relevant bots had human thinking ability, still less that they were conscious. In that respect a proper conversation carries more weight. The best chat-bots in the Loebner, however, are not at all convincing either, partly for a different reason – we know that no attempt has been made to endow them with real understanding or real thought; they are just machines designed to pass the test by faking thoughtful responses.

Ironically some of the less successful Botprize entrants have been more ambitious. In particular, Neurobot, created by Zafeiros Fountas as an MSc project, used a spiking neural network with a Global Workspace architecture; while not remotely on the scale of a human brain, this is in outline a plausible design for human-style cognition; indeed, one of the best we’ve got (which may not be saying all that much, of course). The Global Workspace idea, originated by Bernard Baars, situates consciousness as a general purpose space where inputs from different modules can be brought together and handled effectively. Although I have my reservations about that concept, it could at least reasonably be claimed that Neurobot’s functional states were somewhere on a spectrum which ultimately includes proper consciousness (interestingly, they would presumably be cognitive states of a kind which have never existed in nature, far simpler than those of most animals yet in some respects more like states of a human brain).
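
Baars’ idea is easier to grasp with a minimal sketch (again my own simplification, not Neurobot’s actual architecture): specialist modules post candidate contents with salience scores, they compete for access, and the winner is broadcast back to every module.

```python
# Minimal sketch of the Global Workspace idea as described above (my own
# simplification, not Neurobot's implementation): modules post candidate
# contents, compete for access, and the winner is broadcast to all.

class Module:
    def __init__(self, name):
        self.name = name
        self.received = None

    def propose(self, percept, salience):
        return {"source": self.name, "content": percept, "salience": salience}

    def receive(self, broadcast):
        self.received = broadcast  # every module sees the winning content

modules = [Module("vision"), Module("hearing"), Module("planning")]
proposals = [
    modules[0].propose("enemy ahead", 0.9),
    modules[1].propose("footsteps left", 0.6),
    modules[2].propose("reload soon", 0.3),
]

winner = max(proposals, key=lambda p: p["salience"])  # competition for access
for m in modules:
    m.receive(winner)                                 # global broadcast

print("in the workspace:", winner["content"], "from", winner["source"])
```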

The 2012 winners, by contrast, like the most successful Loebner chat-bots, relied on replaying recorded sequences of real human behaviour. Alas, this seems in practice to be the Achilles heel of Turing-style tests; canned responses just work too well.