There’s an interesting video discussion here at the Institute of Art and Ideas, between Margaret Boden, Steven Rose and Barry Smith, on Neuroscience versus Philosophy. I’ve never found neuroscientists that belligerent myself; it seems to be mainly other people who make exaggerated claims on behalf of their subject (although talking up a particular bit of research is not unknown).
Following on somewhat from the idea of there being a quale of being me, the latest JCS includes a paper by Marc Slors and Fleur Jongepier about Mineness without Minimal Selves.
‘Mineness’ here is the quality of our experiences that makes them feel like ours, their first-person givenness. Slors and Jongepier say that the majority of theories explain this in terms of how the experience relates to a minimal self; although different terminology is used all these theories have in common that they rely on ‘internal’ structure, whereas Slors and Jongepier want instead to advocate a view based on external structure.
What does that all mean? The typical theory – they use Dan Zahavi as a representative case – says that there are three elements; the object experienced, the experiencing, and the subject who experiences. Some have argued that there can’t be experience without an experiencer, but we have to remember that a figure as august as Hume held that there was no subject apart from the stream of experience, no core ‘me’, or at least not one that he could perceive in himself. Now although the subject is indeed not part of the experience per se, it is experientially linked with it in this structure, and that’s why it has the feel of belonging to me. In a way this comes down to the commonsensical claim that experiences feel like mine because they relate to me; not surprising that that should be a popular point of view.
That structure, however, takes no account of time: it is, as it were, an instant view: Slors and Jongepier don’t think this will do. They quote Metzinger saying that he experiences his leg as having always been part of him, and his experiences as part of a stream of consciousness. They hold that this diachronic aspect of experience cannot be left out. Moreover, while they grant that some version of the internal structure described above could be bodged up to allow for continuous experience, it could not easily take account of more distant memories, which they hold to be equally important.
I’m not sure I see this. We’ve talked about unfortunate patients who have no ability to form new long or medium term memories: they exist in a kind of small temporal island, never able to remember how they got where they are and hypothesising that they regained consciousness only a few minutes ago. These people are nevertheless perfectly lucid and articulate and apart from the absence of memory seem to be having unimpaired experiences which seem to be their own just as much as anyone else’s do. Slors and Jongepier would probably point out that they retain memories from their earlier lives, before their brains were damaged: but if we hypothesise a person with no memories would we also deny them any sense of owning their experiences? I don’t really see why.
Anyway, Slors and Jongepier propose a coherentist theory which says merely that experience has to fit into a larger ‘psychobiography’, dispensing with the minimal self. The final, curious element in the theory is the claim that this essential coherence of experience with a background biography is not itself an object of experience. Indeed, it’s the fact that the coherence is not experienced that makes the experience feel like mine.
This seems odd at first sight: how can the absence of an experience of coherence make an experience feel like my own? Putting it informally I think the gist is that it is, as it were, the absence of surprise that lets us know things are familiar. Experiences seem like mine because they slide into the stream of consciousness without a splash.
It is an ingenious theory which seems to capture some aspects of phenomenology rather well; but in the end I don’t feel motivated to adopt it: it isn’t really solving any problems for me. I’m inclined to think that all direct experience seems like mine just because it is direct; my experiences are, as it were, right there, while the external world (and even more so someone else’s experiences) are matters of conjecture and inference. I suppose that means I’m hanging on to my minimal self for the moment.
What follows is a draft passage which might eventually form part of a longer piece: I’d appreciate any feedback. – Peter
Let’s ask a stupid question that may not even be answerable. How many qualia are there? It is generally assumed, I think, that this is like asking how long is a piece of string: that there is an indefinite multiplicity of qualia, that in fact, for every distinguishable sensation there is a matching distinct quale.
As we know, colour is always to the fore in these discussions, and the most common basic example of a quale is probably the colour quale we experience when we see a red rose. I think it is uncontroversial that all sensory experiences come with qualia (uncontroversial among those who believe in qualia at all, that is), although the basis for that appears to be purely empirical; I’m not aware of any arguments to show that all categories of sensory experience must necessarily come with qualia. It would be interesting and perhaps enlightening if some explorers of the phenomenal world reported that, say, the taste of pure water had no accompanying qualia – or that for some, slightly zombish people it had none, while for others it had the full complement of definite phenomenal qualities. To date that has not happened (and perhaps it can’t happen?); it seems to be universally agreed that if qualia exist at all, they accompany every sensory experience.
I think it is generally believed that feelings, phenomenal states with no direct relation to details of the external world, have qualia too. Pain qualia are often discussed, with feelings of hunger and pleasure getting occasional mentions; qualia of emotions are also mentioned without provoking controversy. It seems in fact that all experience is generally taken to have accompanying qualia, including dream or hallucinatory experience, and perhaps even certain memories.
In fact there seems to be an interesting, debatable borderline in memory. Vividly recalling a piece of music in real time seems, I would say, to have the same qualia as hearing it live through the ears. (Or are the qualia of memories fainter? Do qualia, as a matter of fact, vary in intensity? Or is that idea a kind of contamination from the effable experiences that pair with each quale? It could be so, but then if there is no variation in intensity qualia must be sort of binary, fully on at all times – or fully off – and that doesn’t feel quite right either.) In general the same might be claimed for all those memories that involve some ‘replay’ of experience or feelings; the replay has qualia. Where nothing is held before our attention, on the other hand, there’s nothing. The act of merely summoning up a PIN as we use it does not have its own qualia; there’s nothing it is like to recall a password, though there might be something it is like to search the memory for one, and something unpleasant it is like to panic when we fail.
There is certainly room for some phenomenological exploration around these areas, but that more or less exhausts the domain of qualia as I understand it to be generally recognised. I think, however, that it actually stretches a little further than that. There is, in my view, something it is like to be me, something properly ineffable and separable from all the particular sensations and feelings that being me entails. If this is indeed a quale (and of course since this is an ineffable matter I can only appeal to the reader’s own introspective research) then I think it’s in a category of its own. We might be tempted to assimilate it to the feelings, and say it’s the feeling of existing. Or perhaps we might think it’s simply the quale that goes with proprioception, the complex but essential sense that tells us where our body is at any moment. Those are respectable qualia no doubt, but I believe there’s a quale of being me that goes beyond them.
To that we can add a related and problematic entity which uniquely links the Hard and Easy problems, a phenomenal state we could call the executive quale, that of being in charge. We feel that consciousness is effective, that our conscious decisions have real heft in respect of our behaviour.
This, I think, is the very thing that many people are concerned to deny: the feeling of being causally effective; but to date I don’t think it has been regarded as a quale. For some people, who wish to deny both real agency and real subjectivity, the conjunction will seem logical and appealing – to others perhaps less so…
An intriguing paper from Benjamin D. Young claims that we can have phenomenal experiences of which we are unaware – although experiences of which we are aware always have phenomenal content. The paper is about smell, though I don’t really see why similar considerations shouldn’t apply to other senses.
At first sight the idea of phenomenal experience of which we are unaware seems like a contradiction in terms. Phenomenal experience is the subjective aspect of consciousness, isn’t it? How could an aspect of consciousness exist without consciousness itself? Young rightly says that it is well established that things we only register subconsciously can affect our behaviour – but that can’t include the sort of experience which for some people is the real essence of consciousness, can it?
The only way I can imagine subjectivity going on in my head without me experiencing it is if someone else were experiencing it – not a matter of me experiencing things subconsciously, but of my subconscious being a real separate entity, or perhaps of it all going on in the mind of an alternate personality of the kind that seems to occur in Dissociative Identity Disorder (Multiple Personality, as it used to be called).
On further reflection, I don’t think that’s the kind of thing Young meant at all: I think instead he is drawing a distinction between explicit and inexplicit awareness. So his point is that I can experience qualia without having any accompanying conscious thought about those qualia or the experience.
That’s true and an important point. One reason qualia seem so slippery, I think, is that discussion is always in second order terms: we exchange reports of qualia. But because the things themselves are irredeemably first order they have a way of disappearing from the discussion, leaving us talking about their effable accompaniments.
Ironically, something like that may have happened in Young’s paper, as he goes on to discuss experiments which allegedly shed light on subjective experience. Smell is a complex phenomenon of course; compared with the neat structure of colours the rambling and apparently inexhaustible structure of smell space is dauntingly hard to grasp. However, smell conveniently has valence in a way that colours don’t: some smells are nice and some are nasty. Humans apparently vary their sniff rate partly in response to a smell’s valence and Young thinks that this provides an objective, measurable way into the subjectivity of the experience.
Beyond that he goes on to consider mating choice: it seems human beings, like other mammals, choose their mates partly on the basis of smell. I imagine this might be controversial to some, and some of the research Young quotes sounds amusingly naive. In answer to a questionnaire, female subjects rated body odour as an important factor in selecting a sexual partner; well yes, if a guy smells you’re maybe not going to date him, huh?
I haven’t read the study which was doubtless on a much more sophisticated level, and Young cites a whole wealth of other interesting papers. The problem is that while this is all fascinating psychologically, none of it can properly bear on the philosophical issue because qualia, the ultimate bearers of subjectivity, are acausal and cannot affect our behaviour. This is shown clearly by the zombie twin argument: my zombie twin has no qualia but his behaviour is ex hypothesi the same as mine.
Still, the use of valence as a way in is interesting. The normal philosophical argument is that we have no way of telling whether my subjective red is your subjective green: but it’s hard to argue that my subjective nasty is your subjective nice (unless we also hypothesise that you seek out nasty experiences and avoid nice ones?).
Quentin Ruyant has written a thoughtful piece about quantum mechanics and philosophy of mind: in a nutshell he argues both that quantum theory may be relevant to the explanation of consciousness and that consciousness may be relevant to the interpretation of quantum theory.
Is quantum theory relevant to consciousness? Well, of course some people have said so, notably Sir Roger Penrose and Stuart Hameroff. I think Ruyant is right, though, that the majority of philosophers and probably the majority of physicists dismiss the idea that quantum theory might be needed to explain consciousness. People often suggest that the combination of the two only appeals because both are hard to explain: ‘here’s one mystery and here’s another: maybe one explains the other’. Besides, people say, the brain is far too big and hot and messy for anything other than classical physics to be required.
In making the case for the relevance of quantum theory, Ruyant relies on the Hard Problem. His position is that the Hard Problem is not biological but a matter of physics, whereas the Easy Problem, to do with all the scientifically tractable aspects of consciousness, can be dealt with by biology or psychology.
Actually, turning aside from the main thread of Ruyant’s argument, there are some reasons to suggest that quantum physics is relevant to the Easy Problem. Penrose’s case, in fact, seems to suggest just that: in his view consciousness is demonstrably non-computable and some kind of novel quantum mechanics is his favoured candidate to fill the gap. Penrose’s examples, things like solving mathematical problems, look like ‘Easy’ Problem matters to me.
Although I don’t think anyone (including me) advocates the idea, it also seems possible to argue that the ‘spooky action at a distance’ associated with quantum entanglement might conceivably have something to tell us about intentionality and its remarkable power to address things that are remote and not directly connected with us.
Anyway, Ruyant is mainly concerned with the Hard Problem, and his argument is that metaphysics and physics are closely related. Topics like the essential nature of physical things straddle the borderline between the two subjects, and it is not at all implausible therefore that the deep physics of quantum mechanics might shed light on the deep metaphysics of phenomenal experience. It seems to me a weakish line of argument, possibly tinged with a bit of prejudice: some physicists are inclined to feel that while their subject deals with the great fundamentals, biology deals only with the chance details of life; sort of a more intellectual kind of butterfly collecting. That kind of thinking is not really well founded, and it seems particularly odd to think that biology is irrelevant when considering a phenomenon that, so far as we know, appears only in animals and is definitely linked very strongly with the operation of the brain. John Searle for one argues that ‘Hard Problem’ consciousness arises from natural biological properties of brain tissue. We don’t yet know what those properties are, but in his view it’s absurd to think that the job of nerves could equally well be performed by beer cans and string. Ruth Millikan, somewhat differently, has argued that consciousness is purely biological in nature, arising from and defined by evolutionary needs.
I think the truth is that it’s difficult to get anywhere at this meta-theoretical level: we don’t really decide what kind of theory is most likely to be right and then concentrate on that area; we decide what the true theory most likely is and then root for the kind of theory it happens to be. That, to a great extent, is why quantum theories are not very popular: no-one has come up with a particular one that is cogent and appealing. It seems to me that Ruyant likes the idea of physics-based theories because he favours panpsychism, or panphenomenalism, and so is inclined to think that the essential nature of matter is likely to be the right place to look for a theory.
To be honest, though, I doubt whether any kind of science can touch the Hard Problem. It’s about entities that have no causal properties and are ineffable: how could empirical science ever deal with that? It might well be that a scientist will eventually give us the answer, but if so it won’t be by doing science, because neither classical nor quantum physics can really touch the inexpressible.
Actually, though, there is a long shot. If Colin McGinn is partly on the right track, it may be that consciousness seems mysterious to us simply because we’re not looking at it the right way: our minds won’t conceptualise it correctly. Now the same could be true of quantum theory. We struggle with the interpretation of quantum mechanics, but what if we could reorient our brains so that it simply seemed natural, and we groped instead for an acceptable ‘interpretation’ of spooky classical physics? If we could make such a transformation in our mental orientation, then perhaps consciousness would make sense too? It’s possible, but we’re back to banging two mysteries together in the hope that some spark will be generated.
Ruyant’s general case, that metaphysicians should be informed by our best physics, is hard to argue with. At the moment few philosophers really engage with the physics and few physicists really grasp the philosophy. Why do philosophers avoid quantum physics? Partly, no doubt, just because it’s difficult, and relies on mathematics which few philosophers can handle. Partly also, I think there’s an unspoken fear that in learning about quantum physics your intuitions will be trained into accepting a particular weltanschauung that might not be helpful. Connected with that is the fear that quantum physics isn’t really finished or definitive. Where would I be if I came up with a metaphysical system that perfectly supported quantum theory and then a few years later it turned out that I should have been thinking in terms of string theory? Metaphysicians cross their fingers and hope they can deal with the key issues at a level of generality that means they won’t be rudely contradicted by an unexpected advance in physics a few years later.
I suppose what we really need is someone who can come up with a really good specific theory that shows the value of metaphysics informed by physics, but few people are qualified to produce one. I must say that Ruyant seems to be an exception, with an excellent grasp of the theories on both sides of the divide. Perhaps he has a theory of consciousness in his back pocket…?
A few years ago we noted the remarkable research by Fried, Mukamel, and Kreiman which reproduced and confirmed Libet’s famous research. Libet, in brief, had found good evidence using EEG that a decision to move was formed about half a second before the subject in question became consciously aware of it; Fried et al produced comparable results by direct measurement of neuron firing.
In the intervening years, electrode technology has improved and should now make it possible to measure multiple sites. The scanty details here indicate that Kreiman, with support from MIT, plans to repeat the research in an enhanced form; in particular he proposes to see whether, having identified the formed intention to move, it is then possible to stop it before the action takes place. This resembles the faculty of ‘free won’t’ by which Libet himself hoped to preserve some trace of free will.
From the MIT article it is evident that Kreiman is a determinist and believes that his research confirms that position. It is generally believed that Libet’s findings are incompatible with free will in the sense that they seem to show that consciousness has no effect on our actual behaviour.
That actually sheds an interesting side-light on our view of what free will is. A decision to move still gets made, after all; why shouldn’t it be freely made even though it is unconscious? There’s something unsatisfactory about unconscious free will, it seems. Our desire for free will is a desire to be in control, and by that we mean a desire for the entity that does the talking to be in control. We don’t really think of the unconscious parts of our mind as being us; or at least not in the same way as that gabby part that claims responsibility for everything (the part of me that is writing this now, for example).
This is a bit odd, because the verbal part of our brain obviously does the verbals; it’s strange and unrealistic to think it should also make the decisions, isn’t it? Actually if we are careful to distinguish between the making of the decision and being aware of the decision – which we should certainly do, given that one is clearly a first order mental event and the other equally clearly second order – then it ceases to be surprising that the latter should lag behind the former a bit. Something has to have happened before we can be aware of it, after all.
Our unease about this perhaps relates to the intuitive conviction of our own unity. We want the decision and the awareness to be a single event, we want conscious acts to be, as it were, self-illuminating, and it seems to be just that which the research ultimately denies us.
It is the case, of course, that the decisions made in the research are rather weird ones. We’re not often faced with the task of deciding to move our hands at an arbitrary time for no reason. Perhaps the process is different if we are deciding which stocks and shares to buy? We may think about the pros and cons explicitly, and we can see the process by which the conclusion is reached; it’s not plausible that those decisions are made unconsciously and then simply notified to consciousness, is it?
On the other hand, we don’t think, do we, that the process of share-picking is purely verbal? The words flowing through our consciousness are signals of a deeper imaginative modelling, aren’t they? If that is the case, then the words might still be lagging. Perhaps the distinction to be drawn is not really between conscious and unconscious, but between simply conscious and explicitly conscious. Perhaps we just shouldn’t let the talky bit pretend to be the whole of consciousness just because the rest is silent.
The European Human Brain Project seems to be running into problems. This Guardian report notes that an open letter of protest has been published by 170 unhappy neuroscientists. They are seeking to influence and extend a review that is due, hoping they can get a change of direction. I don’t know a great deal about the relevant EU bureaucracy, but I should think the letter-writers’ chances of success are small, not least because in Henry Markram they’re up against a project leader who is determined, resourceful, and not lacking support of his own. There’s a response to the letter here.
It is a little hard to work out exactly what the disagreement is about; the Guardian seems to smoosh together the current objections of former insiders with the criticisms of those who thought the project was radically premature in the first place. I find myself trying to work out what the protestors want, from Markram’s disparaging remarks about them, rather the way we have to reconstruct some ancient heresies from the rebuttals of the authorities, the only place where details survive.
We’re told the disagreement is between those who study behaviour at a high level and the project leaders who want to build simulations from the bottom up. In particular some cognitive neuroscience projects have been ‘demoted’ to partner status. People say the project has been turned into a technology one: Markram says it always was: he suggests that piling up more data is useless and that instead he’s doing an ICT project which will provide a platform for integrating the data, and that it’s all coming out of an ICT budget anyway.
We naive outsiders had picked up the impression that the project had a single clear goal; a working simulation of a whole human brain. That is sort of still there, but reading the response it seems to be a pretty distant aspiration. Apparently a mouse brain is going to be done first, but even that is a way off; it’s all about the platforms. Earlier documents suggest there will actually be six platforms, only one of which is about brain simulation; the others are neuroinformatics, high performance computing, medical informatics, neuromorphic computing, and neurorobotics – fascinating subjects. The implicit suggestion is that this kind of science can’t be done properly just by working in labs and publishing papers, it requires advanced platforms in which research can be integrated. Really? Speaking as a professional bureaucrat myself, I have to say frankly that that sounds uncommonly like the high-grade bollocks emitted by a project leader who has more money than he knows what to do with. The EU in particular is all about establishing unwanted frameworks and common platforms which lie dead in drawers forever after. If people want to share findings, publishing papers is fine (alright, not flawless). If it’s about doing actual research, having all the projects captured by a common platform which might embody common errors and common weaknesses doesn’t sound like a good idea at all. My brain doesn’t know, but my gut says the platforms won’t be much use.
Let’s be honest, I don’t really know what’s going on, but if one were cynical one might suppose that the success of the Human Genome Project made the authorities open to other grand projects, and one on the brain hit the spot. The problem is that we knew what a map of the genome would be like, and we pretty much knew it could be done and how. We don’t have a similarly clear idea relating to the brain. However, the concept was appealing enough to attract a big pot of money, both in the EU and then in the US (an even bigger pot). The people who got control of these pots cannot deliver anything like the map of the human genome, but they can buy in the support of fund-hungry researchers by disbursing some of the gold while keeping the politicians and bureaucrats happy by wrapping everything in the afore-mentioned bollocks. The authors of the protest letter perhaps ought to be criticising the whole idea, but really they’re just upset about being left out. The deeper sceptics who always said the project was premature – though they may have thought they were talking about brain simulation, not a set of integrative platforms – were probably right; but there’s no money in that.
Grand projects like this are probably rarely the best way to control research funding, but they do get funding. Maybe something good somewhere will accidentally get the help it needs; meanwhile we’ll be getting some really great European platforms.
Doctors at George Washington found by chance recently that stimulating a patient’s claustrum served to disrupt consciousness temporarily (abstract). The patient was being treated for epilepsy, and during this kind of surgery it is normal to use an electrode to stimulate areas of the brain in the target area before surgery to determine their role and help ensure the least possible damage is done to important functions. The claustrum is a sheet-like structure which seems to be well connected to many parts of the brain; Crick and Koch suggested it might be ‘the conductor of the orchestra’ of consciousness.
New Scientist reported this as the discovery of the ‘on/off’ switch for consciousness; but that really doesn’t seem to be the claustrum’s function: there’s no reason at the moment to suppose it is involved in falling asleep, or anaesthesia, or other kinds of unconsciousness. The on/off idea seems more like a relatively desperate attempt to explain the discovery in layman’s terms, reminiscent of the all-purpose generic tabloid newspaper technology report in Michael Frayn’s The Tin Men:
British scientists have developed a “magic box”, it was learned last night. The new wonder device was tested behind locked doors after years of research. Results were said to have exceeded expectations… …The device is switched on and off with a switch which works on the same principle as an ordinary domestic light switch…
Actually, one of the most interesting things about the finding is that the state the patient entered did not resemble sleep or any of those other states; she did not collapse or close her eyes, but instantly stopped reading and became unresponsive – although if she had been asked to perform a repetitive task before stimulation started, she would continue for a few seconds before tailing off. On some occasions she uttered a few incoherent syllables unprompted. This does sound more novel and potentially more interesting than a mere on/off switch. She was unable to report what the experience was like as she had no memory of it afterwards – that squares with the idea that consciousness was entirely absent during stimulation, though it’s fair to note that part of her hippocampus, which has an important role in memory formation, had already been removed.
Could Crick and Koch now be vindicated? It seems likely in part: the claustrum seems at least to have some important role – but it’s not absolutely clear that it is a co-ordinating one. One of the long-running problems for consciousness has been the binding problem: how the different sensory inputs, processed and delivered at different speeds, somehow come together into a smoothly co-ordinated experience. It could be that the claustrum helps with this, though some further explanation would be needed. As a long shot, it might even be that the claustrum is part of the ‘Global Workspace’ of the mind hypothesised by Bernard Baars, an idea that is still regularly invoked and quoted.
But we must be cautious. All we really know is that stimulating the claustrum disrupted consciousness. That does not mean consciousness happens in the claustrum. If you blow up a major road junction near a car factory, production may cease, but it doesn’t mean that the junction was where the cars were manufactured. Looking at it sceptically we might note that since the claustrum is well connected it might provide an effective way of zapping several important areas at once, and it might be the function of one or more of these other areas that is essential to sustaining consciousness.
However, it is surely noteworthy that a new way of being unconscious should have been discovered. It seems an unprecedentedly pure way, with a very narrow focus on high level activity, and that does suggest that we’re close to key functions. It is ethically impossible to put electrodes in anyone’s claustrum for mere research reasons, so the study cannot be directly replicated or followed up; but perhaps the advance of technology will provide another way.
Smell is the most elusive of the senses. Sight is beautifully structured and amenable to analysis in terms of consistent geometry and a coherent domain of colours. Smells… how does one smell relate to another? There just seems to be an infinite number of smells, all one of a kind. We can be completely surprised by an unprecedented smell which is like nothing we ever experienced before, in a way we can’t possibly be surprised by a new colour (with some minor possible exceptions). Our olfactory system effortlessly assigns new unique smell experiences to substances that never existed until human beings synthesised them.
There don’t even seem to be any words for smells: or at least, the only way we can talk about them is by referring to “the smell of X”, as in a “smoky smell” or “the smell of lemons”. We don’t have to do that to describe shapes or colours: they can be described as “blue”, or “square” without our having to say they are “sky-coloured” or “the shape of a box”. (Except perhaps in the case of orange? Is “orange” short for “the colour of oranges”?) Even for taste we have words like “bitter” and “sweet”. The only one I can think of for smells is “nidorous”, which is pretty obscure – and in order to explain it I have to fall back on saying it describes the “smell of” burning/cooking meat. All we have to describe smells is “strong” and “faint” (my daughter, reading over my shoulder, says what about “pungent”? She does not consider “pungent” to be merely a synonym of “strong” – you may be indifferent to a strong smell, but not to a pungent one, she claims).
With that by way of preamble, let me introduce the interesting question considered here by William Lycan: does smell represent? When we smell, do we smell something? There is a range of possible answers. We might say that when I smell, I smell sausages (for example). Or that I smell a smell (which happens to be the smell of sausages). Or I might say I just have a smell experience: I may know that it’s associated with sausage smells and hence with sausages, but in itself it’s just an experience.
Lycan (who believes that we smell a gaseous miasma) notes two arguments for something like the last position – that smell doesn’t represent anything. First, introspection tells us nothing about what a smell represents. If I were a member of a culture that did not make sausages or eat meat, and had never experienced them, my first nose-full of sausage odour would convey nothing to me beyond itself. It’s different for sight: we inherently see things, and when we see our first sausage there can be no doubt we are seeing a thing, even if we do not yet know much about its nature: it would be absurd to maintain we were merely having a visual experience.
The second argument is that smells can’t really be wrong: there are no smell illusions. If a car is sprayed with “new car” perfume to make us think that it is fresh off the production line, we may draw a mistaken inference, but our nose was not wrong about the smell, which was real. But representations can always be wrong, so if we can’t be wrong, there is no representation.
Lycan is unimpressed by introspective evidence: the mere fact that philosophers disagree about what it tells us is enough, he feels, to discredit it. The second argument fails because it assumes that if smells represent, they must represent their causes: but they might just represent something in the air. On getting a whiff of my first sausage I would not know what it was, but I might well be moved to say “What’s that appetising (or disgusting) smell?” I wouldn’t simply say “Golly, I am undergoing a novel olfactory experience for some opaque reason.” I think in fact we could go further there and argue that I might well say “What’s that I can smell?” – but that doesn’t suit Lycan’s preferred position. (My daughter intervenes to say “What about ‘acrid’?”)
Lycan summarises a range of arguments (One is an argument by Richardson that smell is phenomenologically “exteroceptive”, inherently about things out there: Lycan endorses this view, but surely relying on phenomenology is smuggling back in the introspection he was so scathing about when the other side invoked it?). His own main argument rests on the view that how something smells is something over and above all the other facts about it. The premise here is very like that in the famous thought experiment of Mary the colour scientist, though Lycan is not drawing the same conclusions at all. He claims instead that:
I can know the complex of osphresiological fact without knowing how the rose smells because knowing is knowing-under-a-representation… that solution entails that olfactory experience involves representation.
That does make some sense, I feel (What about “osphresiological”! We’re really working on the vocabulary today, aren’t we?). You may be asking yourself, however, whether this is a question that needs a single answer. Couldn’t we say that sometimes smells represent miasmas, sometimes sausages, and sometimes nothing at all?
Lycan, in what I take to be a development of his view, is receptive to the idea of layering: that in fact smells can represent not just a cloud of stuff in the air, but also the thing from which they emanated. That being so, I am not completely clear why we should give primacy to the miasma. Two contrary cases suggest themselves. First, suppose there is an odour so faint I don’t even perceive it as such consciously, but have a misty sense of salsiccian (alright, I made it up) presence which makes me begin to think about how agreeable a nice Cumberland sausage for lunch might be. Wouldn’t we say that in some sense the smell represented sausages to me? We can’t say it represented a miasma, because no such thing ever entered my mind.
Second, if we accept layering we might say that the key point is about the essential or the minimal case: we can smell without that smell representing a sausage, but what’s the least it can represent and still be a smell? Can it represent nothing? Suppose I dream and have an odd, unrecognisable experience. Later on, when awake, I encounter a Thai curd sausage for the first time and find that the experience I had was in fact an olfactory one, the smell of this particular kind of comestible. My dream experience cannot possibly have represented a sausage, a miasma, a smell, or anything but itself because I didn’t know what it was: but, it turns out, it was the smell of curd sausage.
I think your reaction to that is likely to depend on whether you think an experience could be a smell experience without being recognisable as such; if not, you may be inclined to agree with Lycan, who would probably reiterate his view that smells are sensing-under-a-representation. That view entails that there is an ineffability about smell, and Lycan suggests this might help account for the poverty of smell vocabulary that I noted above. Interestingly it turns out that this very point has been attacked by Majid and Burenhult, albeit not in a way that Lycan considers fatal to his case. Majid and Burenhult studied the Jahai, a nomadic hunter-gatherer tribe on the Malay Peninsula, and found that they have a very rich lexicon of odour terms, such as a word for “the smell of petrol, smoke and bat droppings” (what, all of them?). It’s just us English speakers, it seems, who are stuck with acrid nidors.
So, was the brouhaha over the Turing Test justified? It was widely reported last week that the test had been passed for the first time by a chatbot named ‘Eugene Goostman’. I think the name itself is a little joke: it sort of means ‘well-made ghost man’.
This particular version of the test was not the regular Loebner which we have discussed before (Hugh Loebner must be grinding his teeth in frustration at the apparent ease with which Warwick garnered international media attention), but a session at the Royal Society apparently organised by Kevin Warwick. The bar was set unusually low in this case: to succeed the chatbot only had to convince 30% of the judges that it was human. This was based on the key sentence in the paper by Turing which started the whole thing:
I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.
Fair enough, perhaps, but less impressive than the 50% demanded by other versions; and if 30% is the benchmark, this wasn’t actually the first ever pass, because other chatbots like Cleverbot have scored higher in the past. Goostman doesn’t seem to be all that special: in an “interview” on BBC radio, with only the softest of questioning, it used one response twice over, word for word.
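The 30% benchmark is easy enough to state as a little calculation. Here is a minimal sketch; the judge verdicts are invented for illustration, and of course no real Turing test is scored by a function call:

```python
def passes_turing_criterion(judge_verdicts, threshold=0.30):
    """Turing-style pass criterion: the machine 'passes' if at least
    `threshold` of the judges mistake it for a human, i.e. the average
    interrogator identifies it correctly no more than (1 - threshold)
    of the time."""
    fooled = sum(1 for verdict in judge_verdicts if verdict == "human")
    return fooled / len(judge_verdicts) >= threshold

# Hypothetical panel: 4 of 10 judges fooled (a 40% rate).
verdicts = ["human"] * 4 + ["machine"] * 6

print(passes_turing_criterion(verdicts))        # 40% clears the 30% bar
print(passes_turing_criterion(verdicts, 0.50))  # but not the stricter 50% one
```

The point the sketch makes plain is that “passing the Turing test” is entirely relative to the threshold chosen: the same transcript passes Warwick’s version and fails the 50% one.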
The softness of the questioning does seem to be a crucial variable in Turing tests. If the judges stick to standard small talk and allow the chatbot to take the initiative, quite a reasonable dialogue may result: if the judges are tough it is easy to force the success rate down to zero by various tricks and traps. Iph u spel orl ur werds rong, fr egsampl, uh bott kanot kope, but a human generally manages fine.
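Why does the misspelling trick work so well? A toy keyword-matching bot (entirely made up for illustration; real chatbots are more elaborate, but the weakness is the same in kind) shows the mechanism: anything that defeats the exact lookup drops the bot into an evasive stock reply, where a human reader copes with mangled spelling easily.

```python
# Canned replies keyed on exact keywords -- hypothetical, not Goostman's actual table.
RESPONSES = {
    "weather": "Lovely day, isn't it?",
    "name": "My name is Eugene.",
}

def toy_bot(utterance):
    """Return a canned reply if a keyword appears verbatim; otherwise evade."""
    text = utterance.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    return "Interesting! Tell me more."  # evasive fallback

print(toy_bot("What's your name?"))  # keyword 'name' matches
print(toy_bot("Wot is ure naym?"))   # misspelled: falls through to evasion
```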
Perhaps that wouldn’t have worked for Goostman, though, because he was presented as a relatively ignorant young boy whose first language was not English, giving him some excuse for not understanding things. This stratagem attracted some criticism, but really it is of a piece with chatbot strategy in general; faking and gaming is what it’s all about. No-one remotely supposes that Goostman, or Cleverbot, or any of the others, has actually attained consciousness, or is doing anything that could properly be called thinking. Many years ago I believe there were serious efforts to write programs that to some degree imitated the probable mental processes of a human being: they identified a topic, accessed a database of information about it, retained a set of ‘attitudes’ towards things and tried to construct utterances that made sense in relation to them. It is a weakness of the Turing test that it does not reward that kind of effort; a robot with poor grammar and general knowledge might be readily detectable even though it gave signs of some nascent understanding, while a bot which generates smooth responses without any attempt at understanding has a much better chance of passing.
So perhaps the curtain should be drawn down on the test; not because it has been passed, but because it’s no use.