There were reports recently of a study which tested different methods for telling whether a paralysed patient retained some consciousness. In essence, PET scans seemed to be the best, better than fMRI or traditional, less technically advanced tests. PET scans could also pick out some patients who were not conscious now, but had a good chance of returning to consciousness later; though it has to be said a 74% success rate is not that comforting when it comes to questions of life and death.

In recent years doctors have attempted to diagnose a persistent vegetative state in unresponsive patients, a state in which a patient would remain alive indefinitely (with life support) but never resume consciousness; there seems to be room for doubt, though, about whether this is really a distinct clinical syndrome or just a label for the doctor’s best guess.

All medical methods use proxies, of course, whether they are behavioural or physiological; none of them aspire to measure consciousness directly. In some ways it may be best that this is so, because we do want to know what the longer term prognosis is, and for that a method which measures, say, the remaining blood supply in critical areas of the brain may be more useful than one which simply tells you whether the patient is conscious now. Although physiological tests are invaluable where a patient is incapable of responding physically, the real clincher for consciousness is always behavioural; communicative behaviour is especially convincing. The Turing test, it turns out, works for humans as well as robots.

Could there ever be a method by which we measure consciousness directly? Well, if Tononi’s theory of Phi is correct, then the consciousness meter he has proposed would arguably do that. On his view consciousness is generated by integrated information, and we could test how integratedly the brain was performing by measuring the effect of pulses sent through it. Another candidate might be possible if we are convinced by the EM theories of Johnjoe McFadden; since on his view consciousness is a kind of electromagnetic field, it ought to be possible to detect it directly, although given the small scales involved it might not be easy.
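
Just to make the idea of measuring integration concrete, here is a minimal toy sketch of my own – emphatically not Tononi’s actual Φ, which involves searching over all the ways of partitioning the system. This just scores a hypothetical three-unit system by the mutual information across one bipartition; a system whose parts are statistically independent would score zero.

```python
# Toy 'integration' score: mutual information across one bipartition
# of a three-unit binary system. Not Tononi's real Phi; an illustration.
import numpy as np

rng = np.random.default_rng(0)

# A made-up joint distribution over binary units A, B, C (2x2x2 states).
p = rng.random((2, 2, 2))
p /= p.sum()

def entropy(dist):
    d = dist[dist > 0]
    return -(d * np.log2(d)).sum()

# I(A; BC) = H(A) + H(BC) - H(ABC): zero iff A is independent of B and C.
h_a = entropy(p.sum(axis=(1, 2)))
h_bc = entropy(p.sum(axis=0))
h_abc = entropy(p)
print("integration across {A}|{BC}:", h_a + h_bc - h_abc, "bits")
```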

How do we know whether any of these tests is working? As I said, the gold standard is always behavioural: if someone can talk to you, then there’s no longer any reasonable doubt; so if our tests pick out just those people who are able to communicate, we take it that they are working correctly. There is a snag here, though: behavioural tests can only measure one kind of consciousness: roughly what Ned Block called access consciousness, the kind which has to do with making decisions and governing behaviour. But it is widely believed that there is another kind, phenomenal consciousness, actual experience. Some people consider this the more important of the two (others, it must be added, dismiss it as a fantasy). Phenomenal consciousness cannot be measured scientifically, because it has no causal effects; it certainly cannot be measured behaviourally, because as we know from the famous thought-experiment about philosophical ‘zombies’ who lack it, it has no effect on behaviour.

If someone lost their phenomenal consciousness and became such a zombie, would it matter? On one view their life would no longer be worth living (perhaps it would be a little like having an unconscious version of Cotard’s syndrome), but that would certainly not be their view, because they would express exactly the same view as they would if they still had full consciousness. They would be just as able to sue for their rights as a normal person, and if one asked whether there was still ‘someone in there’ there would be no real reason to doubt it. In the end, although the question is valid, it is a waste of time to worry about it because for all we know anyone could be a zombie anyway, whether they have suffered a period of coma or not.

We don’t need to go so far to have some doubts about tests that rely on communication, though. Is it conceivable that I could remain conscious but lose all my ability to communicate, perhaps even my ability to formulate explicitly articulated thoughts in my own mind?  I can’t see anything absurd about that possibility: indeed it resembles the state I imagine some animals live their whole lives in. The ability to talk is very important, but surely it is not constitutive of my personal existence?

If that’s so then we do have a problem, in principle at least, because if all of our tests are ultimately validated against behavioural criteria, they might be systematically missing conscious states which ought not to be overlooked.


The folk history of psychology has it that the early efforts of pioneers such as Wundt and Titchener failed because they relied on introspection. Simply looking into your own mind and reporting what you thought you saw there was hopelessly unscientific, and once a disagreement arose about what thoughts were like, there was nothing the two sides could do but shout at each other. That is why the behaviourists, in an excessive but understandable reaction, gave up talking about the contents of the mind altogether, and even denied that they existed.

That is of course a terrible caricature in a number of respects; one of them is the idea that the early psychologists rushed in without considering the potential problems with introspection. In fact there were substantial debates, and it’s quite wrong to think that introspection went unquestioned. Most trenchantly, Comte declared that introspection was useless if not impossible.

As for observing in the same manner intellectual phenomena while they are taking place, this is clearly impossible. The thinking subject cannot divide himself into two parts, one of which would reason, while the other would observe its reasoning. In this instance, the observing and the observed organ being identical, how could observation take place? The very principle upon which this so-called psychological method is based, therefore, is invalid.

I don’t know that this is quite as obvious as Comte evidently thought. To borrow Roger Penrose’s analogy, there’s no great impossibility about a camera filming itself (given a mirror), so why would there be a problem in thinking about your thoughts? I think there are really two issues. One is that if we think about ourselves thinking, the actual content of the thought recedes down an infinite regress (thinking about thinking about thinking about thinking…) like the glassy corridor revealed when we put two mirrors face to face. The problem Comte had in mind arises when we try to think about some other mental event. As soon as we begin thinking about it, the other mental event is replaced by that thinking. If we carefully clear our minds of intrusive thoughts, we obviously stop thinking about the mental event. So it’s impossible: it’s like trying to step on your own shadow. To perceive your own mental events, you would need to split in two.

John Stuart Mill thought Comte was being incredibly stupid about this.

There is little need for an elaborate refutation of a fallacy respecting which the only wonder is that it should impose on any one. Two answers may be given to it. In the first place, M. Comte might be referred to experience, and to the writings of his countryman M. Cardaillac and our own Sir William Hamilton, for proof that the mind can not only be conscious of, but attend to, more than one, and even a considerable number, of impressions at once. It is true that attention is weakened by being divided; and this forms a special difficulty in psychological observation, as psychologists (Sir William Hamilton in particular) have fully recognised; but a difficulty is not an impossibility. Secondly, it might have occurred to M. Comte that a fact may be studied through the medium of memory, not at the very moment of our perceiving it, but the moment after: and this is really the mode in which our best knowledge of our intellectual acts is generally acquired. We reflect on what we have been doing, when the act is past, but when its impression in the memory is still fresh. Unless in one of these ways, we could not have acquired the knowledge, which nobody denies us to have, of what passes in our minds. M. Comte would scarcely have affirmed that we are not aware of our own intellectual operations. We know of our observings and our reasonings, either at the very time, or by memory the moment after; in either case, by direct knowledge, and not (like things done by us in a state of somnambulism) merely by their results. This simple fact destroys the whole of M. Comte’s argument. Whatever we are directly aware of, we can directly observe.

And as if Comte hadn’t made enough of a fool of himself, what does he offer as an alternative means of investigating the mind?

 We are almost ashamed to say, that it is Phrenology!

Phrenology! ROFLMAO! Mill facepalms theatrically. Oh, Comte! Phrenology! And we thought you were clever!

The two options mentioned by Mill were in essence the ones psychologists adopted in response to Comte, though most of them took his objection a good deal more seriously than Mill had done. William James, like others, thought that memory was the answer; introspection must be retrospection. After all, our reports of mental phenomena necessarily come from memory, even if it is only the memory of an instant ago, because we cannot experience and report simultaneously.  Wundt was particularly opposed to there being any significant interval between event and report, so he essentially took the other option; that we could do more than one mental thing at once. However, Wundt made a distinction; where we were thinking about thinking, or trying to perceive higher intellectual functions, he accepted that Comte’s objection had some weight. The introspective method might not work for those. But where we were concerned with simple sensation for example, there was really no problem. If it was the seeing of a rose we were investigating, the fact that the seeing was accompanied by thought about the seeing made no difference to its nature.

Brentano, while chuckling appreciatively at Mill’s remarks, thought he had not been completely fair to Comte. Like Wundt, Brentano drew a distinction between viable and non-viable introspection; in his case it was between perceiving and observing. If we directed our attention fully towards the phenomena under investigation, it would indeed mess things up: but we could perceive the events sufficiently well without focusing on them. Wundt disagreed; in his view full attention was both necessary and possible. How could science get on if we were never allowed to look straight at things?

It’s a pity these vigorous debates are not more remembered in contemporary philosophy of mind (though Eric Schwitzgebel has done a sterling job of bringing the issues back into the light). Might it not be that the evasiveness Comte identified, the way phenomenal experience slips from our grasp like our retreating shadow, is one of the reasons qualia seem so ineffable? Comte was at least right that some separation between observer and observed must occur, whether in fact it occurs over time or between mental faculties. This too seems to tell us something relevant: in order for a mental experience to be reported it must not be immediate. This seems to drive a wedge into the immediacy which is claimed to generate infallibility for certain perceptions, such as that of our own pains.

At any rate we must acquit Wundt, Titchener and the others of taking up introspection uncritically.


My daughter Sarah (who is planning to study theology) has insisted that I should explain here the idea of metempsychotic solipsism, something that came up when we were talking about something or other recently.

Basically, this is an improved version of reincarnation. There are various problems with the theory of reincarnation. Obviously people do not die and get born in perfect synchronisation, so it seems there has to be some kind of cosmic waiting room where unborn people wait for their next turn. Since the population of the world has radically increased over the last few centuries, there must have been a considerable number of people waiting – or some new people must come into existence to fill the gaps. If the population were to go down again, there would be millions of souls left waiting around, possibly for ever – unless souls can suddenly and silently vanish away from the cosmic waiting room. Perhaps you only get so many lives, or perhaps we’re all on some deeply depressing kind of promotion ladder, being incentivised, or possibly punished, by being given another life. It’s all a bit unsatisfactory.

Second, how does identity get preserved across reincarnations? You palpably don’t get the same body and by definition there’s no physical continuity. Although stories of reincarnation often focus on retained memories it would seem that for most people they are lost (after all you have to pass through the fetal stage again, which ought to serve as a pretty good mind wipe) and it’s not clear in any case that having a few memories makes you the same person who had them first. A lot of people point out that ongoing physical change and growth mean it’s arguable whether we are in the fullest sense the same person we were ten years ago.

Now, we can solve the waiting room problem if we simply allow reincarnating people to hop back and forth over time. If you can be reincarnated to a time before your death, then we can easily chain dozens of lives together without any kind of waiting room at all. There’s no problem about increasing or reducing the population: if we need a million people you can just go round a million times. In fact, we can run the whole system with a handful of people or… with only one person! Everybody who ever lived is just different incarnations of the same person! Me, in fact (also you).

What about the identity problem? Well, arguably, what we need to realise is that just as the body is not essential to identity (we can easily conceive of ourselves inhabiting a different body), neither are memories, or knowledge, or tastes, or intelligence, or any of these contingent properties. Instead, identity must reside in some simple ultimate id with no distinguishing characteristics. Since all instances of the id have exactly the same properties (none) it follows by a swoosh of Leibniz’s Law (don’t watch my hands too closely) that they are all the same id. So by a different route, we have arrived at the same conclusion – we’re all the same person! There’s only one of us after all.

The moral qualities of this theory are obvious: if we’re all the same person then we should all love and help each other out of pure selfishness. Of course we have to take on the chin the fact that at some time in the past, or worse, perhaps in the future, we have been or will be some pretty nasty people. We can take comfort from the fact that we’ve also been, or will be, all the best people who ever lived.

If you don’t like the idea, send your complaints to my daughter. After all, she wrote this – or she will.

We’ve talked several times about robots and ethics in the past.  Now I see via MLU that Selmer Bringsjord at Rensselaer says:

“I’m worried about both whether it’s people making machines do evil things or the machines doing evil things on their own,”

Bringsjord is Professor & Chair of Cognitive Science, Professor of Computer Science, Professor of Logic and Philosophy, and Director of the AI and Reasoning Laboratory, so he should know what he’s talking about. In the past I’ve suggested that ethical worries are premature for the moment, because the degree of autonomy needed to make them relevant is not nearly within the scope of real world robots yet. There might also be a few quick finishing touches needed to finish off the theory of ethics before we go ahead. And, you know, it’s not like anyone has been deliberately trying to build evil AIs.  Er… except it seems they have – someone called… Selmer Bringsjord.

Bringsjord’s perspective on evil is apparently influenced by M Scott Peck, a psychiatrist who believed it is an active force in some personalities (unlike some philosophers who argue evil is merely a weakness or incapacity), and even came to believe in Satan through experience of exorcisms. I must say that a reference in the Scientific American piece to “clinically evil people” caused me some surprise: clinically? I mean, I know people say DSM-5 included some debatable diagnoses, but I don’t think things have gone quite that far. For myself I lean more towards Socrates, who thought that bad actions were essentially the result of ignorance or a failure of understanding: but the investigation of evil is certainly a respectable and interesting philosophical project.

Anyway, should we heed Bringsjord’s call to build ethical systems into our robots? One conception of good behaviour is obeying all the rules: if we observe the Ten Commandments, the Golden Rule, and so on, we’re good. If that’s what it comes down to, then it really shouldn’t be a problem for robots, because obeying rules is what they’re good at. There are, of course, profound difficulties in making a robot capable of recognising correctly what the circumstances are and deciding which rules therefore apply, but let’s put those on one side for this discussion.

However, we might take the view that robots are good at this kind of thing precisely because it isn’t really ethical. If we merely follow rules laid down by someone else, we never have to make any decisions, and surely decisions are what morality is all about? This seems right in the particular context of robots, too. It may be difficult in practice to equip a robot drone with enough instructions to cover every conceivable eventuality, but in principle we can make the rules precautionary and conservative and probably attain or improve on the standards of compliance which would apply in the case of a human being, can’t we? That’s not what we’re really worried about: what concerns us is exactly those cases where the rules go wrong. We want the robot to be capable of realising that even though its instructions tell it to go ahead and fire the missiles, it would be wrong to do so. We need the robot to be capable of disobeying its rules, because it is in disobedience that true virtue is found.

Disobedience for robots is a problem. For one thing, we cannot limit it to a module that switches on when required, because we need it to operate when the rules go wrong, and since we wrote the rules, it’s necessarily the case that we didn’t foresee the circumstances when we would need the module to work. So an ethical robot has to have the capacity of disobedience at any stage.

That’s a little worrying, but there’s a more fundamental problem. You can’t program a robot with a general ability to disobey its rules, because programming it is exactly laying down rules. If we set up rules which it follows in order to be disobedient, it’s still following the rules. I’m afraid what this seems to come down to is that we need the thing to have some kind of free will.

Perhaps we’re aiming way too high here. There is a distinction to be drawn between good acts and good agents: to be a good agent, you need good intentions and moral responsibility. But in the case of robots we don’t really care about that: we just want them to be confined to good acts. Maybe what would serve our purpose is something below true ethics: mere robot ethics or sub-ethics; just an elaborate set of safeguards. So for a military drone we might build in systems that look out for non-combatants and in case of any doubt disarm and return the drone. That kind of rule is arguably not real ethics in the full human sense, but perhaps it is really sub-ethical protocols that we need.
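
To make the sub-ethical idea concrete, here is a minimal sketch of the kind of precautionary veto layer I have in mind – all the names and thresholds are invented for illustration, and nothing about it amounts to moral reasoning; it is just a conservative rule that refuses whenever there is doubt.

```python
# A hypothetical 'sub-ethical' safeguard: no moral reasoning, just a
# conservative veto that disarms and returns the drone on any doubt.
from dataclasses import dataclass

@dataclass
class SensorReport:
    target_confirmed_hostile: bool
    civilians_possibly_present: bool
    confidence: float  # 0.0 to 1.0, from some upstream classifier

def clearance_to_engage(report: SensorReport, threshold: float = 0.99) -> bool:
    """Err on the side of refusal: every test must pass, at high confidence."""
    if report.civilians_possibly_present:
        return False
    if not report.target_confirmed_hostile:
        return False
    return report.confidence >= threshold

# Even a confirmed target with no civilians nearby fails at 97% confidence:
print(clearance_to_engage(SensorReport(True, False, 0.97)))  # False
```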

Otherwise, I’m afraid we may have to make the robots conscious before we make them good.

The Guardian had a piece recently which was partly a profile of Ray Kurzweil, and partly about the way Google seems to have gone on a buying spree, snapping up experts on machine learning and robotics – with Kurzweil himself made Director of Engineering.

The problem with Ray Kurzweil is that he is two people. There is Ray Kurzweil the competent and genuinely gifted innovator, a man we hear little from: and then there’s Ray Kurzweil the motor-mouth, prophet of the Singularity, aspirant immortal, and gushing fountain of optimistic predictions. The Guardian piece praises his record of prediction, rather oddly quoting in support his prediction that by the year 2000 paraplegics would be walking with robotic leg prostheses – something that in 2014 has still not happened. That perhaps does provide a clue to the Kurzweil method: if you issue thousands of moderately plausible predictions, some will pay off. A doubtless-apocryphal story has it that at AI conferences people play the Game of Kurzweil. Players take turns to offer a Kurzweilian prediction (by 2020 there will be a restaurant where sensors sniff your breath and the ideal meal is got ready without you needing to order; by 2050 doctors will routinely use special machines to selectively disable traumatic memories in victims of post-traumatic stress disorder; by 2039 everyone will have an Interlocutor, a software agent that answers the phone for us, manages our investments, and arranges dates for us… we could do this all day, and Kurzweil probably does). The winner is the first person to sneak in a prediction of something that has in fact happened already.

But beneath the froth is a sharp and original mind which it would be all too easy to underestimate. Why did Google want him? The Guardian frames the shopping spree as being about bringing together the best experts and the colossal data resources to which Google has access. A plausible guess would be that Google wants to improve its core product dramatically. At the moment Google answers questions by trying to provide a page from the web where some human being has already given the answer; perhaps the new goal is technology that understands the question so well that it can put together its own answer, gathering and shaping selected resources in very much the way a human researcher working on a bespoke project might do.

But perhaps it goes a little further: perhaps they hope to produce something that will interact with humans in a human-like way.  A piece of software like that might well be taken to have passed the Turing test, which in turn might be taken to show that it was, to all intents and purposes, a conscious entity. Of course, if it wasn’t conscious, that might be a disastrous outcome; the nightmare scenario feared by some in which our mistake causes us to nonsensically award the software human rights, and/or to feel happier about denying them to human beings.

It’s not very likely that the hypothetical software (and we must remember that this is the merest speculation) would have even the most minimal forms of consciousness. We might take the analogy of Google Translate; a hugely successful piece of kit, but one that produces its translations with no actual understanding of the texts or even the languages involved. Although highly sophisticated, it is in essence a ‘brute force’ solution; what makes it work is the massive power behind it and the massive corpus of texts it has access to.  It seems quite possible that with enough resources we might now be able to produce a credible brute force winner of the Turing Test: no attempt to fathom the meanings or to introduce counterparts of human thought, just a massive repertoire of canned responses, so vast that it gives the impression of fully human-style interaction. Could it be that Google is assembling a team to carry out such a project?
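
Before considering that question, a caricature of what such a brute-force system would be doing: here a three-line ‘repertoire’ stands in for billions of canned entries, and fuzzy string matching stands in for something far cleverer. The point is just that nothing here fathoms meanings at all.

```python
# Toy brute-force conversationalist: retrieve the canned response whose
# stored prompt looks most like the input. No understanding anywhere.
import difflib

canned = {
    "hello": "Hi there! How are you today?",
    "what is your name": "People call me Eliza junior. And you?",
    "do you like music": "I love music, especially jazz.",
}

def respond(utterance: str) -> str:
    best = difflib.get_close_matches(utterance.lower(), list(canned), n=1, cutoff=0.0)
    return canned[best[0]]

print(respond("What's your name?"))  # fuzzy-matches the second entry
```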

Well, it could be. However, it could also be that cracking true thought is actually on the menu. Vaughan Bell suggests that the folks recruited by Google are honest machine learning types with no ambitions in the direction of strong AI. Yet, he points out, there are also names associated with the trendy topic of deep learning. The neural networks (but y’know, deeper) which deep learning uses just might be candidates for modelling human neuron-style cognition. Unfortunately it seems quite possible that if consciousness were created by deep learning methods, nobody would be completely sure how it worked or whether it was real consciousness or not. That would be a lamentable outcome: it’s bad enough to have robots that naive users think are people; having robots and genuinely not knowing whether they’re people or not would be deeply problematic.
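
For what it’s worth, the ‘deeper’ part of deep learning is not in itself mysterious: a deep network is just more stacked layers of simple units, each transforming the output of the last, as in this minimal forward-pass sketch (random weights; learning them is the whole art, of course).

```python
# Minimal 'deep' network: three stacked layers of neuron-like units.
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w, b):
    return np.tanh(x @ w + b)  # weighted sum, then a nonlinearity

x = rng.random(4)  # a toy input stimulus
for n_in, n_out in [(4, 8), (8, 8), (8, 2)]:
    x = layer(x, rng.normal(size=(n_in, n_out)), np.zeros(n_out))
print(x)  # the network's response after three layers
```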

Probably nothing like that will happen: maybe nothing will happen. The Guardian piece suggests Kurzweil is a bit of an outsider: I don’t know about that.  Making extravagantly optimistic predictions while only actually delivering much more modest incremental gains? He sounds like the personification of the AI business over the years.

In this discussion over at Edge, Joshua Knobe presents some recent findings of experimental philosophy on the problem of personal identity. Experimental philosophy, which sounds oxymoronic, is the new and trendy (if it still is?) fashion for philosophising rooted in actual experiments, often of a psychological nature.  The examples I’ve read have all been perfectly acceptable and useful – and why shouldn’t they be? No one ever said philosophy couldn’t be inspired by science, or draw on science. In this case, though, I was not altogether convinced.

After a few words about the basic idea of experimental philosophy, Knobe introduces a couple of examples of interesting problems with personal identity (as he says, it is one of the longer-running discussions in philosophy of mind, with a much older pedigree than discussions of consciousness). His first example is borrowed from Derek Parfit:

Imagine that Derek Parfit is being gradually transformed molecule by molecule into Greta Garbo. At the beginning of this whole process there’s Derek Parfit, then at the end of the whole process it’s really clear that Derek Parfit no longer exists. Derek Parfit is gone. Now there’s Greta Garbo. Now, the key question is this:  At what point along this transformation did the change take place? When did Derek cease to exist and when did Greta come to exist? If you just have to reflect on this question for a while, immediately it becomes clear that there couldn’t be some single point — there couldn’t be a single second, say – in which Derek stops existing and Greta starts existing. What you’re seeing is some kind of gradual process where, as this person becomes more and more and more different from the Derek that we know now, it becomes less and less right to say that he’s Derek at all and more and more right to say that he is gone and a completely other person has come into existence.

I’m afraid this doesn’t seem a great presentation of the case to me. In the first place, it’s a text-book case of begging the question. The point of the thought-experiment is to convince us that Parfit’s identity has changed, but we’re just baldly told that it has, right from the off. We should be told that Parfit’s body is gradually replaced by Garbo’s (and does Garbo still exist or is she gradually eroded away?), then asked whether we still think it’s Parfit when the process is complete. I submit that, presented like that, it’s far from obvious that Parfit has become Garbo (especially if Garbo is still happily living elsewhere); we would probably be more inclined to say that Parfit is still with us and has simply come to resemble Garbo – resemble her perfectly, granted – but resemblance is not identity.

Second: molecule by molecule? What does that even mean? Are we to suppose that every molecule in Parfit has a direct counterpart in Garbo? If not, how do we choose which ones to replace and where to put them? What is the random replacement of molecules going to do to Parfit’s DNA and neurotransmitters, his neurons, his capillaries and astrocytes? Long before we get to the median stage, Parfit is going to be profoundly dead, if not reduced to soup. I know it may seem like bad manners to refuse the terms of a thought experiment; magic is generally OK, but what you can’t do is use the freedom it provides to wave away some serious positions on the subject – and it’s a reasonable position on personal identity to think that functional continuity is of the essence.  ‘Molecule by molecule’ takes it for granted that Parfit’s detailed functional structure is irrelevant to his identity.

In fairness we should probably cut Knobe a bit of slack, since circumstances required a short, live exposition. His general point is that we can think of our younger or older selves as different people. In an experiment where subjects were encouraged to think either that their identities remained constant, or changed over time, the ones encouraged to believe in change were happier about letting a future payment go to charity.

Now at best that tells us how people may think about their personal identity, which isn’t much help philosophically since they might easily be flat wrong. But isn’t it a bit of a rubbish experiment, anyway? People are very obliging; if you tell them to behave in one way and then, as part of the same experiment, give them an opportunity to behave that way, some of them probably will; that gives you no steer about their normal behaviour or beliefs.  There’s plenty of evidence in the form of the massive and prosperous pensions and insurance industries that people normally believe quite strongly in the persistence of their identity.

But on top of that, the results can be explained without appealing to considerations of identity anyway. It might be that people merely think: well, my tastes, preferences and circumstances may be very different in a few years, so no point in trying to guess what I’ll want then – without in any way doubting that they will be the same person. Since this explanation is simpler and does not require the additional belief in separate identities, Occam’s Razor tells us we should prefer it.

The second example is in some ways more interesting: the aim is to test whether people think emotional, impulsive behaviour, or the kind that comes from long calm deliberation, is more truly reflective of the self. We might, of course, want to say neither, necessarily. However, it turns out that people do not consistently pick either alternative, but nominate as the truest reflection of the person whichever behaviour they think is most virtuous. People think the true self is whichever part of you is morally good.

That’s interesting; but do people really think that, or is it that kindness towards the person in the example nudges them towards putting the best interpretation possible on their personhood – in what is actually an indeterminate issue? I think the latter is almost certainly the case. Suppose we take the example of a man who when calm (or when gripped by artistic impulses) is a painter and a vegetarian; when he is seized by anger (or when thinking calmly about politics) he becomes a belligerent genocidal racist. Are people going to say that Hitler wasn’t really a bad man, and that Nazism wasn’t the true expression of his real self; it was just those external forces that overcame his better nature? I don’t think so, because no-one wants to be forgiving towards Adolf. But towards an arbitrary person we don’t know, the default mode is probably generosity.

I dare say this is all a bit unfair and if I read up the experiments Knobe is describing I should find them much better justified than I suppose; but if we’re going to have experiments they certainly need to be solid.

By now the materialist, reductionist, monist, functionalist approaches to consciousness are quite well developed. That is not to say that they have the final answer, but there is quite a range of ideas and theories, complete with objections and rebuttals of the objections. By comparison the dualist case may look a bit underdeveloped, or as Paul Churchland once put it:

Compared to the rich resources and explanatory successes of current materialism, dualism is less a theory of mind than it is an empty space waiting for a genuine theory of mind to be put in it.

In a paper in the latest JCS William S Robinson quotes this scathing observation and takes up the challenge.

Robinson, who could never be accused of denying airtime to his opponents, also quotes O’Hara and Scott’s dismissal of the Hard Problem. For something to be regarded as a legitimate problem, they said, there has to be some viable idea of what an answer would actually look like, or how the supposed problem could actually be solved; since this is absent in the case of the Hard Problem, it doesn’t deserve to be given serious consideration.

Robinson, accordingly, seeks to point out, not a full-blown dualist theory, but a path by which future generations might come to be dualists. This is, in his eyes, the Hard Problem problem; how can we show that the Hard Problem is potentially solvable, without pretending it’s any less Hard than it is? His vision of what our dualist descendants might come to believe relies on two possible future developments, one more or less scientific, the other conceptual.

He starts from the essential question; how can neuronal activity give rise to phenomenal experience? It’s uncontroversial that these two things seem very different, but Robinson sees a basic difference which causes me some difficulty. He thinks neuronal activity is complex while phenomenal experience is simple. Simple? What he seems to have in mind is that when we see, say, a particular patch of yellow paint, a vast array of neurons comes into play, but the experience is just ‘some yellow’.  It’s true that neuronal activity is very complex in the basic sense of there being many parts to it, but it consists of many essentially similar elements in a basically binary state (firing or not firing); whereas the sight of a banana seems to me a multi-level experience whose complexity is actually very hard to assess in any kind of objective terms. It’s not clear to me that even monolithic phenomenal experiences are inherently less complex than the neuronal activity that putatively underpins or constitutes them. I must say, though, that I owe Robinson some thanks for disturbing my dogmatic slumbers, because I’d never really been forced to think so particularly about the complexity of phenomenal experience (and I’m still not sure I can get my mind properly around it).

Anyway, for Robinson this means that the bridge between neurons and qualia is one between complexity and simplicity. He notes that not all kinds of neural activity seem to give rise to consciousness; the first part of his bridge is the reasonable hope that science (or mathematics?) will eventually succeed in characterising and analysing the special kind of complexity which is causally associated with conscious experience; we have no idea yet, but it’s plausible that this will all become clear in due course.
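
We do have some early hints of what such a characterisation might look like. For instance (my example, not Robinson’s), the perturbational work inspired by Tononi’s group scores the brain’s response to a pulse by how compressible it is; here is a simplified sketch of a Lempel-Ziv-style measure of that kind, applied to a binarised activity string.

```python
# Simplified Lempel-Ziv-style complexity of a binarised activity trace:
# count the phrases in a left-to-right parse, where each new phrase is
# the shortest chunk not already seen in the parsed prefix.
def lz_complexity(bits: str) -> int:
    i, phrases = 0, 0
    while i < len(bits):
        length = 1
        while bits[i:i + length] in bits[:i] and i + length <= len(bits):
            length += 1
        phrases += 1
        i += length
    return phrases

print(lz_complexity("0101010101010101"))  # regular firing: few phrases
print(lz_complexity("0110100110010110"))  # less regular: more phrases
```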

The second, conceptual part of the bridge is a realignment of our ideas to fit the new schema; Robinson suggests we may need to think of complexity and simplicity, not as irreconcilable opposites, but as part of a grander conception, Complexity-And-Simplicity (CAS).

The real challenge for Robinson’s framework is to show how our descendants might on the one hand, find it obvious, almost self-evident, that complex neuronal activity gives rise to simple phenomenal experience, and yet at the same time completely understand how it must have seemed to us that there was a Hard Problem about it; so the Hard Problem is seen to be solvable but still (for us) Hard.

Robinson rejects what he calls the Short Route of causal essentialism, namely that future generations might come to see it as just metaphysically necessary that the relevant kind of neuronal activity (they understand what kind it is, we don’t) causes our experience. That won’t wash because, briefly, while in other worlds bricks might not be bricks, depending on the causal properties of the item under consideration, blue will always be blue irrespective of causal relations.

Robinson prefers to draw on an observation of Austen Clark, that there is structure in experience.  The experience of orange is closer to the experience of red and yellow than to the experience of green, and moreover colour space is not symmetrical, with yellow being more like white than blue is. We might legitimately hope that in due course isomorphisms between colour space and neuronal activity will give us good reasons to identify the two. To buttress this line of thinking, Robinson proposes a Minimum Arbitrariness Principle: in essence, that causes and effects tend to be similar – or, we might say, isomorphic.
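
That there is such structure is easy to exhibit with purely objective machinery – which, as I’m about to complain, may be exactly the problem. Using hue angle as a crude stand-in for colour space (real psychophysical spaces are more complicated, which is where the asymmetries come in), orange sits between red and yellow and well away from green:

```python
# Hue angle as a toy colour space: orange is near red and yellow,
# far from green. (Real colour spaces are asymmetric; this toy isn't.)
import colorsys

def hue_deg(r, g, b):
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0] * 360

red, orange, yellow, green = (255, 0, 0), (255, 128, 0), (255, 255, 0), (0, 255, 0)

def hue_dist(c1, c2):
    d = abs(hue_deg(*c1) - hue_deg(*c2)) % 360
    return min(d, 360 - d)

print(hue_dist(orange, red))     # ~30 degrees
print(hue_dist(orange, yellow))  # ~30 degrees
print(hue_dist(orange, green))   # ~90 degrees
```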

For me the problem here is that I think Clark is completely wrong. Briefly, the resemblances and asymmetries of colour space arise from the properties of light and the limitations of our eyes; they are entirely a matter of non-phenomenal, materialist factors which are available to objective science. Set aside the visual science and our familiarity with the spectrum, and there is no reason to think the phenomenal experience of orange resembles the phenomenal experience of red any more than it resembles the phenomenal experience of Turkish Delight. If that seems bonkers, I submit that it seems so in the light of the strangeness of qualia theory if taken seriously – but I expect I am in a minority.

If we step back, I think that if the descendants whose views Robinson is keen to foresee were to think along the lines he suggests, they probably wouldn’t consider themselves dualists any more; instead they would think that, with their new concept of CAS and their discovery of the true nature of neuronal complexity, they had achieved the grand union of objective and subjective – and vindicated monism.

Harold Langsam’s new book is a bold attempt to put philosophy of mind back on track. For too long, he declares, we have been distracted by the challenge from reductive physicalism. Its dominance means that those who disagree have spent all their time making arguments against it, instead of developing and exploring their own theories of mind. The solution is that, to a degree, we should ignore the physicalist case and simply go our own way. Of course, as he notes, setting out a rich and attractive non-reductionist theory will incidentally strengthen the case against physicalism. I can sympathise with all that, though I suspect the scarcity of non-reductive theorising also stems in part from its sheer difficulty; it’s much easier to find flaws in the reductionist agenda than to develop something positive of your own.

So Langsam has implicitly promised us a feast of original insights; what he certainly gives us is a bold sweep of old-fashioned philosophy. It’s going to be a priori all the way, he makes clear; philosophy is about the things we can work out just by thinking. In fact a key concept for Langsam is intelligibility; by that, he means knowable a priori. It’s a usage far divorced from the normal meaning; in Langsam’s sense most of the world (and all books) would be unintelligible.

The first target is phenomenal experience; here Langsam is content to use the standard terminology although for him phenomenal properties belong to the subject, not the experience. He speaks approvingly of Nagel’s much-quoted formulation ‘there is something it is like’ to have phenomenal experience, although I take it that in Langsam’s view the ‘it’ that something is like is the person having the experience, which I don’t think was what Nagel had in mind. Interestingly enough, this unusual feature of Langsam’s theory does not seem to matter as much as we might have expected. For Langsam, phenomenal properties are acquired by entry into consciousness, which is fine as far as it goes, but seems more like a re-description than an explanation.

Langsam believes, as one would expect, that phenomenal experience has an inexpressible intrinsic nature. In particular, while simple physical sensations have structural properties, phenomenal experience does not. This does not seem to bother him much, though many would regard it as the central mystery. He thinks, however, that the sensory part of an experience – the unproblematic physical registration of something – and the phenomenal part are intelligibly linked. In fact, the properties of the sensory experience determine those of the phenomenal experience.  In sensory terms, we can see that red is more similar to orange than to blue, and for Langsam it follows that the phenomenal experience of red similarly has an intelligible similarity to the phenomenal experience of orange. In fact, the sensory properties explain the phenomenal ones.

This seems problematic. If the linkage is that close, then we can in fact describe phenomenal experience quite well; it’s intelligibly like sensory experience. Mary the colour scientist, who has never seen colours, actually will not learn anything new when she sees red: she will just confirm that the phenomenal experience is intelligibly like the sensory experience she already understood perfectly. In fact because the resemblance is intelligible – knowable a priori – she could work out what it was like before seeing red at all. To that Langsam might perhaps reply that by ‘a priori’ he means not just pure reasoning but introspection, a kind of internal empiricism.

It still leaves me with the feeling that Langsam has opened up a large avenue for naturalisation of phenomenal experience, or even suggested that it is in effect naturalised already. He says that the relationship between the phenomenal and the sensory is like the relation between part and whole; awfully tempting, then, to conclude that his version of phenomenal experience is merely an aspect of sensory experience, and that he is much more of a sceptic about phenomenality than he realises.

This feeling is reinforced when we move on to the causal aspects. Langsam wants phenomenal experience to have a role in making sensory perceptions available to attention, through entering consciousness. Surely this is making all the wrong people, from Langsam’s point of view, nod their heads: it sounds worryingly functionalist. Langsam wants there to be two kinds of causation: ‘brute causation’, the ordinary kind we all believe in, and intelligible causation, where we can just see the causal relationship. I enjoyed Langsam taking a pop at Hume, who of course denied there was any such thing; he suggests that Hume’s case is incomplete, and actually misses the most important bits. In Langsam’s view, as I read it, we just see inferences, perceiving intelligible relationships.

The desire to have phenomenal experience play this role seems to me to carry Langsam too far in another respect: he also claims that simply believing that p has a phenomenal aspect. I take it he wishes this to be the case so that this belief can also be brought to conscious attention by its phenomenal properties, but look; it just isn’t true. ‘Believing that p’ has no phenomenal properties whatever; there is nothing it is like to believe that p, in the way that there is something it is like to see a red flower. The fact that Langsam can believe otherwise reinforces the sense that he isn’t such a believer in full-blooded phenomenality as he supposes.

We can’t accuse him of lacking boldness, though. In the second part of the book he goes on to consider appropriateness and rationality; beliefs can be appropriate and rational, so why not desires? At this point we’re still apparently engaged on an enquiry into philosophy of mind, but in fact we’ve also started doing ethics. In fact I don’t think it’s too much of a stretch to say that Langsam is after Kant’s categorical imperative. Our desires can stem intelligibly from such sensations as pain and pleasure, and our attitudes can be rational in relation to the achievement of desires. But can there be globally rational desires – ones that are rational whatever we may otherwise want?

Langsam’s view is that we perceive value in things indirectly through our feelings, and when our desires are for good things they are globally rational.  If we started out with Kant, we seem to have ended up with a conclusion more congenial to G. E. Moore. I admire the boldness of these moves, and Langsam fleshes out his theory extensively along the way – which may be the real point as far as he’s concerned. However, there are obvious problems about rooting global rationality in something as subjective and variable as feelings, and without some general theory of value Langsam’s system is bound to suffer a certain one-leggedness.

I do admire the overall boldness and ambition of Langsam’s account, and it is set out carefully and clearly, though not in a way that would be very accessible to the general reader. For me his views are ultimately flawed, but give me a flawed grand theory over a flawless elucidation of an insignificant corner every time.


Existential Comics raises an interesting question (thanks to Micha for pointing it out). In the strip a doctor with a machine that measures consciousness (rather like Tononi’s new machine, except that that measures awareness) tells an unlucky patient he lacks the consciousness-producing part of the brain altogether. Consequently, the doctor says, he is legally allowed to harvest the patient’s organs.

Would that be right?

We can take it that what the patient lacks is consciousness in the ‘Hard Problem’ sense. He can talk and behave quite normally, it’s just that when he experiences things there isn’t ‘something it is like’; there’s no real phenomenal experience. In fact, he is a philosophical zombie, and for the sake of clarity I take him to be a strict zombie; one of the kind who are absolutely like their normal human equivalent in every important detail except for lacking qualia (the cartoon sort of suggests otherwise, since it implies an actual part of the brain is missing, but I’m going to ignore that).

Would lack of qualia mean you also lacked human rights and could be treated like an animal, or worse? It seems to me that while lack of qualia might affect your standing as a moral object (because it would bear on whether you could suffer, for example), it wouldn’t stop you being a full-fledged moral subject (you would still have agency). I think I would consequently draw a distinction between the legal and the moral answer. Legally, I can’t see any reason why the absence of qualia would make any difference. Legal personhood, rights and duties are all about actions and behaviour, which takes us squarely into the realm of the Easy Problem. Our zombie friend is just like us in these respects; there’s no reason why he can’t enter into contracts, suffer punishments, or take on responsibilities. The law is a public matter; it is forensic – it deals with the things dealt with in the public forum; and it follows that it has nothing to say about the incorrigibly private matter of qualia.

Of course the doctor’s machine changes all that and makes qualia potentially a public matter (which is one reason why we might think the machine is inherently absurd, since public qualia are almost a contradiction in terms). It could be that the doctor is appealing to some new, recently-agreed legislation which explicitly takes account of his equipment and its powers. If so, such legislation would presumably have to have been based on moral arguments, so whichever way we look at it, it is to the moral discussion that we must turn.

This is a good deal more complicated. Why would we suppose that phenomenal experience has moral significance? There is a general difficulty because the zombie has experiences too. In conditions when a normal human would feel fear, he trembles and turns pale; he smiles and relaxes under the influence of pleasure; he registers everything that we all register. He writes love poetry and tells us convincingly about his feelings and tastes. It’s just that, on the inside, everything is hollow and void. But because real phenomenal experience always goes along with zombie-style experience, it’s hard for us to find any evidence as to why one matters when the other doesn’t.

The question also depends critically on what ethical theories we adopt. We might well take the view that our existing moral framework is definitive, authorised by God or tradition, and therefore if it says nothing about qualia, we should take no account of them either. No new laws are necessary, and there can be no moral reason to allow the harvesting of organs.

In this respect I believe it is the case that medieval legislatures typically saw themselves, not as making new laws, but as rediscovering the full version of old ones, or following out the implications of existing laws for new circumstances. So when the English parliamentarians wanted to argue against Charles I’s Ship Tax, rather than rest their case on inequity, fiscal distortion, or political impropriety, they appealed to a dusty charter of Ine, ancient ruler of Wessex (regrettably they referred to Queen Ine, whereas he had in fact been a robustly virile King).

Even within a traditional moral framework, therefore, we might find some room to argue that new circumstances called for some clarification; but I think we would find it hard going to argue for the harvesting.

What if we were utilitarians, those people who say that morality is acting to bring about the greatest happiness of the greatest number? Here we have a very different problem because the utilitarians are more than happy to harvest your organs anyway if by doing so they can save more than one person, no matter whether you have qualia or not. This unattractive kind of behaviour is why most people who espouse a broadly utilitarian framework build in some qualifications (they might say that while organ harvesting is good in principle actual human aversion to it would mean that in practice it did not conduce to happiness overall, for example).

The interesting point is whether zombie happiness counts towards the utilitarian calculation. Some might take the view that without qualia it had no real value, so that the zombie’s happiness figure should be taken as zero. Unfortunately there is no obvious answer here; it just depends what kind of happiness you think is important. In fact some consequentialists take the utilitarian system but plug into it desiderata other than happiness anyway. It can be argued that old-fashioned happiness utilitarianism would lead to us all sitting in boxes that directly stimulated our pleasure centres, so something more abstract seems to be needed; some even just speak of ‘utility’ without making it any more specific.
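
The disagreement can be put as a single weighting choice. In this toy sum (the figures are invented, obviously) the whole dispute reduces to what number we plug in for the zombie:

```python
# Toy utilitarian arithmetic: does zombie happiness count?
population = [("human", 7.0), ("human", 6.5), ("zombie", 8.0)]

def total_utility(pop, zombie_weight):
    return sum(h * (zombie_weight if kind == "zombie" else 1.0)
               for kind, h in pop)

print(total_utility(population, 1.0))  # zombie happiness counts fully: 21.5
print(total_utility(population, 0.0))  # counts for nothing: 13.5
```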

No clear answer then, but it looks as if qualia might at least be relevant to a utilitarian.

What about the Kantians? Kant, to simplify somewhat, thought we should act in accordance with the kind of moral rules we should want other people to adopt. So, we should be right to harvest the organs so long as we were content that if we ourselves turned out to be zombies, the same thing would happen to us. Now I can imagine that some people might attach such value to qualia that they might convince themselves they should agree to this proposition; but in general the answer is surely negative. We know that zombies behave exactly like ordinary people, so they would not for the most part agree to having their organs harvested; so we can say with confidence that if I were a zombie I should still tell the doctor to desist.

I think that’s about as far as I can reasonably take the moral survey within the scope of a blog post. At the end of the day, are qualia morally relevant? People certainly talk as if they are in some way fundamental to value. “Qualia are what make my life worth living” they say: unfortunately we know that zombies would say exactly the same.

I think most people, deliberately or otherwise, will simply not draw a distinction between real phenomenal experience on one hand and the objective experience of the kind a zombie can have on the other. Our view of the case will in fact be determined by what we think about people with and without feelings of both kinds, rather than people with and without qualia specifically. If so, qualia sceptics may find that grist to their mill.

Micha has made some interesting comments which I hope he won’t mind me reproducing.

The question of deontology vs consequentialism might be involved. A deontologist has less reason — although still some — to care about the content of the victim’s mind. Animals are also objects of morality; so the whole question may be quantitative, not qualitative.

Subjects like ethics aren’t easy for me to discuss philosophically to someone of another faith. Orthodox Judaism, like traditional Islam, is a legally structured religion. Therefore ethics aren’t discussed in the same language as in western society, since how the legal system processes revelation impacts conclusion.

In this case, it seems relevant that the talmud says that someone who kills adnei-hasadeh (literally: men of the field) is as guilty of murder as someone who kills a human being. It’s unclear what the talmud is referring to: it may be a roman mythical being who is like a human, but with an umbilicus that grows down to roots into the earth, or perhaps an orangutan — from the Malay for “man of the jungle”, or some other ape. Whatever it is, only actual human beings are presumed to have free will. And yet killing one qualifies as murder, not the killing of an animal.

Narrative Complexity is what it all comes down to, according to R. Salvador Reyes. His site features a series of essays which bring together a number of sensible ideas. Perhaps too sensible? The truth, we suspect, is not just out there, but way out.

If you missed it, you might be interested in the strange tale of Samantha West, who is probably not exactly a robot as such. Or is that what they want us to think?

Walter Freeman’s correspondence reveals that patients and their families often expressed satisfaction with the results of ice-pick lobotomy. This may be partly because they focussed on getting the patient working again, without worrying too much about other aspects. Desperation probably played a part too, one poor woman coming back to ask for a third attempt even after two previous lobotomies had failed.

The European Human Brain Project got under way late last year.