Posts tagged ‘consciousness’

Quentin Ruyant has written a thoughtful piece about quantum mechanics and philosophy of mind: in a nutshell he argues both that quantum theory may be relevant to the explanation of consciousness and that consciousness may be relevant to the interpretation of quantum theory.

Is quantum theory relevant to consciousness? Well, of course some people have said so, notably Sir Roger Penrose and Stuart Hameroff.  I think Ruyant is right, though, that the majority of philosophers and probably the majority of physicists dismiss the idea that quantum theory might be needed to explain consciousness. People often suggest that the combination of the two only appeals because both are hard to explain: ‘here’s one mystery and here’s another: maybe one explains the other’. Besides, people say, the brain is far too big and hot and messy for anything other than classical physics to be required.

In making the case for the relevance of quantum theory, Ruyant relies on the Hard Problem.  His position is that the Hard Problem is not biological but a matter of physics, whereas the Easy Problem, to do with all the scientifically tractable aspects of consciousness, can be dealt with by biology or psychology.

Actually, turning aside from the main thread of Ruyant’s argument, there are some reasons to suggest that quantum physics is relevant to the Easy Problem. Penrose’s case, in fact, seems to suggest just that: in his view consciousness is demonstrably non-computable and some kind of novel quantum mechanics is his favoured candidate to fill the gap. Penrose’s examples, things like solving mathematical problems, look like ‘Easy’ Problem matters to me.

Although I don’t think anyone (including me) advocates the idea, it also seems possible to argue that the ‘spooky action at a distance’ associated with quantum entanglement might conceivably have something to tell us about intentionality and its remarkable power to address things that are remote and not directly connected with us.

Anyway, Ruyant is mainly concerned with the Hard Problem, and his argument is that metaphysics and physics are closely related. Topics like the essential nature of physical things straddle the borderline between the two subjects, and it is not at all implausible therefore that the deep physics of quantum mechanics might shed light on the deep metaphysics of phenomenal experience. It seems to me a weakish line of argument, possibly tinged with a bit of prejudice: some physicists are inclined to feel that while their subject deals with the great fundamentals, biology deals only with the chance details of life; sort of a more intellectual kind of butterfly collecting.  That kind of thinking is not really well founded, and it seems particularly odd to think that biology is irrelevant when considering a phenomenon that, so far as we know, appears only in animals and is definitely linked very strongly with the operation of the brain. John Searle for one argues that ‘Hard Problem’ consciousness arises from natural biological properties of brain tissue. We don’t yet know what those properties are, but in his view it’s absurd to think that the job of nerves could equally well be performed by beer cans and string. Ruth Millikan, somewhat differently, has argued that consciousness is purely biological in nature, arising from and defined by evolutionary needs.

I think the truth is that it’s difficult to get anywhere at this meta-theoretical level:  we don’t really decide what kind of theory is most likely to be right and then concentrate on that area; we decide what the true theory most likely is and then root for the kind of theory it happens to be. That, to a great extent, is why quantum theories are not very popular: no-one has come up with a particular one that is cogent and appealing.  It seems to me that Ruyant likes the idea of physics-based theories because he favours panpsychism, or panphenomenalism, and so is inclined to think that the essential nature of matter is likely to be the right place to look for a theory.

To be honest, though, I doubt whether any kind of science can touch the Hard Problem.  It’s about entities that have no causal properties and are ineffable: how could empirical science ever deal with that? It might well be that a scientist will eventually give us the answer, but if so it won’t be by doing science, because neither classical nor quantum physics can really touch the inexpressible.

Actually, though, there is a long shot.  If Colin McGinn is partly on the right track, it may be that consciousness seems mysterious to us simply because we’re not looking at it the right way: our minds won’t conceptualise it correctly. Now the same could be true of quantum theory. We struggle with the interpretation of quantum mechanics, but what if we could reorient our brains so that it simply seemed natural, and we groped instead for an acceptable ‘interpretation’ of spooky classical physics? If we could make such a transformation in our mental orientation, then perhaps consciousness would make sense too? It’s possible, but we’re back to banging two mysteries together in the hope that some spark will be generated.

Ruyant’s general case, that metaphysicians should be informed by our best physics, is hard to argue with. At the moment few philosophers really engage with the physics and few physicists really grasp the philosophy. Why do philosophers avoid quantum physics? Partly, no doubt, just because it’s difficult, and relies on mathematics which few philosophers can handle. Partly also, I think, there’s an unspoken fear that in learning about quantum physics your intuitions will be trained into accepting a particular weltanschauung that might not be helpful. Connected with that is the fear that quantum physics isn’t really finished or definitive. Where would I be if I came up with a metaphysical system that perfectly supported quantum theory and then a few years later it turned out that I should have been thinking in terms of string theory? Metaphysicians cross their fingers and hope they can deal with the key issues at a level of generality that means they won’t be rudely contradicted by an unexpected advance in physics a few years later.

I suppose what we really need is someone who can come up with a really good specific theory that shows the value of metaphysics informed by physics, but few people are qualified to produce one. I must say that Ruyant seems to be an exception, with an excellent grasp of the theories on both sides of the divide. Perhaps he has a theory of consciousness in his back pocket…?

Doctors at George Washington found by chance recently that stimulating a patient’s claustrum served to disrupt consciousness temporarily (abstract). The patient was being treated for epilepsy, and during this kind of surgery it is normal to use an electrode to stimulate areas of the brain in the target area before surgery to determine their role and help ensure the least possible damage is done to important functions. The claustrum is a sheet-like structure which seems to be well connected to many parts of the brain; Crick and Koch suggested it might be ‘the conductor of the orchestra’ of consciousness.

New Scientist reported this as the discovery of the ‘on/off’ switch for consciousness; but that really doesn’t seem to be the claustrum’s function: there’s no reason at the moment to suppose it is involved in falling asleep, or anaesthesia, or other kinds of unconsciousness. The on/off idea seems more like a relatively desperate attempt to explain the discovery in layman’s terms, reminiscent of the all-purpose generic tabloid newspaper technology report in Michael Frayn’s The Tin Men:

British scientists have developed a “magic box”, it was learned last night. The new wonder device was tested behind locked doors after years of research. Results were said to have exceeded expectations… …The device is switched on and off with a switch which works on the same principle as an ordinary domestic light switch…

Actually, one of the most interesting things about the finding is that the state the patient entered did not resemble sleep or any of those other states; she did not collapse or close her eyes, but instantly stopped reading and became unresponsive – although if she had been asked to perform a repetitive task before stimulation started, she would continue for a few seconds before tailing off. On some occasions she uttered a few incoherent syllables unprompted. This does sound more novel and potentially more interesting than a mere on/off switch. She was unable to report what the experience was like as she had no memory of it afterwards – that squares with the idea that consciousness was entirely absent during stimulation, though it’s fair to note that part of her hippocampus, which has an important role in memory formation, had already been removed.

Could Crick and Koch now be vindicated? It seems likely in part: the claustrum seems at least to have some important role – but it’s not absolutely clear that it is a co-ordinating one. One of the long-running problems in consciousness research has been the binding problem: how the different sensory inputs, processed and delivered at different speeds, somehow come together into a smoothly co-ordinated experience. It could be that the claustrum helps with this, though some further explanation would be needed. As a long shot, it might even be that the claustrum is part of the ‘Global Workspace’ of the mind hypothesised by Bernard Baars, an idea that is still regularly invoked and quoted.

But we must be cautious. All we really know is that stimulating the claustrum disrupted consciousness. That does not mean consciousness happens in the claustrum. If you blow up a major road junction near a car factory, production may cease, but it doesn’t mean that the junction was where the cars were manufactured. Looking at it sceptically we might note that since the claustrum is well connected it might provide an effective way of zapping several important areas at once, and it might be the function of one or more of these other areas that is essential to sustaining consciousness.

However, it is surely noteworthy that a new way of being unconscious should have been discovered. It seems an unprecedentedly pure way, with a very narrow focus on high level activity, and that does suggest that we’re close to key functions. It is ethically impossible to put electrodes in anyone’s claustrum for mere research reasons, so the study cannot be directly replicated or followed up; but perhaps the advance of technology will provide another way.

Botprize is a version of the Turing Test for in-game AIs: they don’t have to talk, just run around playing Unreal Tournament (a first-person shooter game) in a way that convinces other players that they are human. In the current version players use a gun to tag their opponents as bots or humans; the bots, of course, do the same.

The contest initially ran from 2008 up to 2012; in the last year, two of the bots exceeded the 50% benchmark of humanness. The absence of a 2013 contest might have suggested that that had wrapped things up for good: but now the 2014 contest is under way: it’s not too late to enter if you can get your bot sorted by 12 May. This time there will be two methods of judging; one called ‘first person’ (rather confusingly – that sounds as if participants will ask themselves: am I a bot?) is the usual in-game judging; the other (third person) will be a ‘crowd-sourced’ judgement based on people viewing selected videos after the event.

How does such a contest compare with the original Turing Test, a version of which is run every year as the Loebner Prize? The removal of any need to talk seems to make the test easier. Judges cannot use questions to test the bots’ memory (at least not in any detail), general knowledge, or ability to carry the thread of a conversation and follow unpredictable linkages of the kind human beings are so good at. They cannot set traps for the bots by making quirky demands (‘please reverse the order of the letters in each word when you respond’) or looking for a sense of humour.

In practice a significant part of the challenge is simply making a bot that plays the game at an approximately human level. This means the bot must never get irretrievably stuck in a corner or attempt to walk through walls; but also, it must not be too good – not a perfect shot that never misses and is inhumanly quick on the draw, for example. This kind of thing is really not different from the challenges faced by every game designer, and indeed the original bots supplied with the game don’t perform all that badly as human imitators, though they’re not generally as convincing as the contestants.

The way to win is apparently to build in typical or even exaggerated human traits. One example is that when a human player is shot at, they tend to go after the player that attacked them, even when a cool appraisal of the circumstances suggests that they’d do better to let it go. It’s interesting to reflect that if humans reliably seek revenge in this way, that tendency probably had survival value in the real world when the human brain was evolving; there must be important respects in which the game theory of the real world diverges from that of the game.
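What ‘building in human traits’ might amount to can be sketched in a few lines. To be clear, this is a purely illustrative toy, not any actual Botprize entrant’s code; the class, its parameters, and their values are all invented for the example. The idea is to combine noisy aim, a reaction delay, and a statistical bias towards whoever shot you last:

```python
import random

class HumanlikeBot:
    """Toy sketch of 'humanising' tricks for a game bot: imperfect
    aim, a reaction lag, and a bias towards chasing the last attacker."""

    def __init__(self, aim_error_deg=4.0, reaction_ms=250, revenge_bias=5.0):
        self.aim_error_deg = aim_error_deg  # perfect shots look robotic
        self.reaction_ms = reaction_ms      # humans don't react instantly
        self.revenge_bias = revenge_bias    # weight towards the last attacker
        self.last_attacker = None

    def on_hit(self, attacker):
        # remember who shot us, so we can 'irrationally' go after them
        self.last_attacker = attacker

    def aim_at(self, target_bearing_deg):
        # Gaussian noise makes the bot miss about as often as a person
        return target_bearing_deg + random.gauss(0, self.aim_error_deg)

    def choose_target(self, visible):
        # prefer the player who shot us last, even when a cool appraisal
        # of the circumstances would pick a nearer or weaker opponent
        weights = [self.revenge_bias if p == self.last_attacker else 1.0
                   for p in visible]
        return random.choices(visible, weights=weights, k=1)[0]
```

The point of using weights rather than a hard rule is that the bot still sometimes picks the ‘rational’ target, so the revenge habit shows up statistically, the way it does in human players, rather than mechanically.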

Because Botprize is in some respects less demanding than the original Turing Test, the conviction it delivers is less; the 2012 wins did not really make us believe that the relevant bots had human thinking ability, still less that they were conscious. In that respect a proper conversation carries more weight. The best chat-bots in the Loebner, however, are not at all convincing either, partly for a different reason – we know that no attempt has been made to endow them with real understanding or real thought; they are just machines designed to pass the test by faking thoughtful responses.

Ironically some of the less successful Botprize entrants have been more ambitious. In particular, Neurobot, created by Zafeiros Fountas as an MSc project, used a spiking neural network with a Global Workspace architecture; while not remotely on the scale of a human brain, this is in outline a plausible design for human-style cognition; indeed, one of the best we’ve got (which may not be saying all that much, of course). The Global Workspace idea, originated by Bernard Baars, situates consciousness as a general purpose space where inputs from different modules can be brought together and handled effectively. Although I have my reservations about that concept, it could at least reasonably be claimed that Neurobot’s functional states were somewhere on a spectrum which ultimately includes proper consciousness (interestingly, they would presumably be cognitive states of a kind which have never existed in nature, far simpler than those of most animals yet in some respects more like states of a human brain).

The 2012 winners by contrast, like the most successful Loebner chat-bots, relied on replaying recorded sequences of real human behaviour. Alas, this seems in practice to be the Achilles heel of Turing-style tests; canned responses just work too well.

There were reports recently of a study which tested different methods for telling whether a paralysed patient retained some consciousness. In essence, PET scans seemed to be the best, better than fMRI or traditional, less technically advanced tests. PET scans could also pick out some patients who were not conscious now, but had a good chance of returning to consciousness later; though it has to be said a 74% success rate is not that comforting when it comes to questions of life and death.

In recent years doctors have attempted to diagnose a persistent vegetative state in unresponsive patients, a state in which a patient would remain alive indefinitely (with life support) but never resume consciousness; there seems to be room for doubt, though, about whether this is really a distinct clinical syndrome or just a label for the doctor’s best guess.

All medical methods use proxies, of course, whether they are behavioural or physiological; none of them aspire to measure consciousness directly. In some ways it may be best that this is so, because we do want to know what the longer term prognosis is, and for that a method which measures, say, the remaining blood supply in critical areas of the brain may be more useful than one which simply tells you whether the patient is conscious now. Although physiological tests are invaluable where a patient is incapable of responding physically, the real clincher for consciousness is always behavioural; communicative behaviour is especially convincing. The Turing test, it turns out, works for humans as well as robots.

Could there ever be a method by which we measure consciousness directly? Well, if Tononi’s theory of Phi is correct, then the consciousness meter he has proposed would arguably do that. On his view consciousness is generated by integrated information, and we could test how integratedly the brain was performing by measuring the effect of pulses sent through it. Another candidate might be possible if we are convinced by the EM theories of Johnjoe McFadden; since on his view consciousness is a kind of electromagnetic field, it ought to be possible to detect it directly, although given the small scales involved it might not be easy.
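The pulse idea can be caricatured in a few lines: stimulate, record the echo, and score how rich the echo is. This is emphatically not Tononi’s actual measure, just a toy illustration of the intuition, using compressibility as a stand-in: a response confined to one module repeats itself and compresses well, while a response spread across many interacting regions is more varied and compresses badly:

```python
import random
import zlib

def response_complexity(response_bits):
    """Score how rich a pulse response is by how poorly it compresses:
    a stereotyped local echo compresses well (low score); a widespread,
    differentiated response compresses badly (high score)."""
    data = bytes(response_bits)
    return len(zlib.compress(data)) / len(data)

random.seed(1)
# stereotyped local echo: the same few regions repeating
local_echo = [1, 0, 0, 1] * 64
# widespread, differentiated response across many regions
integrated = [random.randint(0, 1) for _ in range(256)]

print(response_complexity(local_echo) < response_complexity(integrated))  # True
```

On this caricature, the ‘consciousness meter’ is just the claim that the second kind of echo, and only the second kind, goes with being conscious.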

How do we know whether any of these tests is working? As I said, the gold standard is always behavioural: if someone can talk to you, then there’s no longer any reasonable doubt; so if our tests pick out just those people who are able to communicate, we take it that they are working correctly. There is a snag here, though: behavioural tests can only measure one kind of consciousness: roughly what Ned Block called access consciousness, the kind which has to do with making decisions and governing behaviour. But it is widely believed that there is another kind, phenomenal consciousness, actual experience. Some people consider this the more important of the two (others, it must be added, dismiss it as a fantasy). Phenomenal consciousness cannot be measured scientifically, because it has no causal effects; it certainly cannot be measured behaviourally, because as we know from the famous thought-experiment about philosophical ‘zombies’ who lack it, it has no effect on behaviour.

If someone lost their phenomenal consciousness and became such a zombie, would it matter? On one view their life would no longer be worth living (perhaps it would be a little like having an unconscious version of Cotard’s syndrome), but that would certainly not be their view, because they would express exactly the same view as they would if they still had full consciousness. They would be just as able to sue for their rights as a normal person, and if one asked whether there was still ‘someone in there’ there would be no real reason to doubt it. In the end, although the question is valid, it is a waste of time to worry about it because for all we know anyone could be a zombie anyway, whether they have suffered a period of coma or not.

We don’t need to go so far to have some doubts about tests that rely on communication, though. Is it conceivable that I could remain conscious but lose all my ability to communicate, perhaps even my ability to formulate explicitly articulated thoughts in my own mind?  I can’t see anything absurd about that possibility: indeed it resembles the state I imagine some animals live their whole lives in. The ability to talk is very important, but surely it is not constitutive of my personal existence?

If that’s so then we do have a problem, in principle at least, because if all of our tests are ultimately validated against behavioural criteria, they might be systematically missing conscious states which ought not to be overlooked.


We’ve talked several times about robots and ethics in the past.  Now I see via MLU that Selmer Bringsjord at Rensselaer says:

“I’m worried about both whether it’s people making machines do evil things or the machines doing evil things on their own,”

Bringsjord is Professor & Chair of Cognitive Science, Professor of Computer Science, Professor of Logic and Philosophy, and Director of the AI and Reasoning Laboratory, so he should know what he’s talking about. In the past I’ve suggested that ethical worries are premature for the moment, because the degree of autonomy needed to make them relevant is nowhere near present in real-world robots yet. There might also be a few quick finishing touches needed to finish off the theory of ethics before we go ahead. And, you know, it’s not like anyone has been deliberately trying to build evil AIs.  Er… except it seems they have – someone called… Selmer Bringsjord.

Bringsjord’s perspective on evil is apparently influenced by M Scott Peck, a psychiatrist who believed it is an active force in some personalities (unlike some philosophers who argue evil is merely a weakness or incapacity), and even came to believe in Satan through experience of exorcisms. I must say that a reference in the Scientific American piece to “clinically evil people” caused me some surprise: clinically? I mean, I know people say DSM-5 included some debatable diagnoses, but I don’t think things have gone quite that far. For myself I lean more towards Socrates, who thought that bad actions were essentially the result of ignorance or a failure of understanding: but the investigation of evil is certainly a respectable and interesting philosophical project.

Anyway, should we heed Bringsjord’s call to build ethical systems into our robots? One conception of good behaviour is obeying all the rules: if we observe the Ten Commandments, the Golden Rule, and so on, we’re good. If that’s what it comes down to, then it really shouldn’t be a problem for robots, because obeying rules is what they’re good at. There are, of course, profound difficulties in making a robot capable of recognising correctly what the circumstances are and deciding which rules therefore apply, but let’s put those on one side for this discussion.

However, we might take the view that robots are good at this kind of thing precisely because it isn’t really ethical. If we merely follow rules laid down by someone else, we never have to make any decisions, and surely decisions are what morality is all about? This seems right in the particular context of robots, too. It may be difficult in practice to equip a robot drone with enough instructions to cover every conceivable eventuality, but in principle we can make the rules precautionary and conservative and probably attain or improve on the standards of compliance which would apply in the case of a human being, can’t we? That’s not what we’re really worried about: what concerns us is exactly those cases where the rules go wrong. We want the robot to be capable of realising that even though its instructions tell it to go ahead and fire the missiles, it would be wrong to do so. We need the robot to be capable of disobeying its rules, because it is in disobedience that true virtue is found.

Disobedience for robots is a problem. For one thing, we cannot limit it to a module that switches on when required, because we need it to operate when the rules go wrong, and since we wrote the rules, it’s necessarily the case that we didn’t foresee the circumstances when we would need the module to work. So an ethical robot has to have the capacity of disobedience at any stage.

That’s a little worrying, but there’s a more fundamental problem. You can’t program a robot with a general ability to disobey its rules, because programming it is exactly laying down rules. If we set up rules which it follows in order to be disobedient, it’s still following the rules. I’m afraid what this seems to come down to is that we need the thing to have some kind of free will.

Perhaps we’re aiming way too high here. There is a distinction to be drawn between good acts and good agents: to be a good agent, you need good intentions and moral responsibility. But in the case of robots we don’t really care about that: we just want them to be confined to good acts. Maybe what would serve our purpose is something below true ethics: mere robot ethics or sub-ethics; just an elaborate set of safeguards. So for a military drone we might build in systems that look out for non-combatants and in case of any doubt disarm and return the drone. That kind of rule is arguably not real ethics in the full human sense, but perhaps it is really sub-ethical protocols that we need.
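A sub-ethical safeguard of that kind is, at bottom, just a conservative decision procedure that defaults to standing down. A minimal sketch might look like this; every name and threshold here is a hypothetical illustration, not drawn from any real system:

```python
def strike_authorised(target, sensor_report):
    """A 'sub-ethical' safeguard: not moral reasoning, just
    conservative rules that default to not firing.
    All field names and thresholds are invented for illustration."""
    if sensor_report.get("noncombatants_detected"):
        return False  # never fire with civilians in the area
    if sensor_report.get("identification_confidence", 0.0) < 0.95:
        return False  # any doubt: disarm and return
    if not target.get("authorised"):
        return False  # only pre-cleared targets
    return True
```

Note the design: any case the rules fail to cover falls through to a refusal, which is the precautionary stance described above. What it cannot do, of course, is recognise the case where following these very rules would be wrong.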

Otherwise, I’m afraid we may have to make the robots conscious before we make them good.

Harold Langsam’s new book is a bold attempt to put philosophy of mind back on track. For too long, he declares, we have been distracted by the challenge from reductive physicalism. Its dominance means that those who disagree have spent all their time making arguments against it, instead of developing and exploring their own theories of mind. The solution is that, to a degree, we should ignore the physicalist case and simply go our own way. Of course, as he notes, setting out a rich and attractive non-reductionist theory will incidentally strengthen the case against physicalism. I can sympathise with all that, though I suspect the scarcity of non-reductive theorising also stems in part from its sheer difficulty; it’s much easier to find flaws in the reductionist agenda than to develop something positive of your own.

So Langsam has implicitly promised us a feast of original insights; what he certainly gives us is a bold sweep of old-fashioned philosophy. It’s going to be a priori all the way, he makes clear; philosophy is about the things we can work out just by thinking. In fact a key concept for Langsam is intelligibility; by that, he means knowable a priori. It’s a usage far divorced from the normal meaning; in Langsam’s sense most of the world (and all books) would be unintelligible.

The first target is phenomenal experience; here Langsam is content to use the standard terminology although for him phenomenal properties belong to the subject, not the experience. He speaks approvingly of Nagel’s much-quoted formulation ‘there is something it is like’ to have phenomenal experience, although I take it that in Langsam’s view the ‘it’ that something is like is the person having the experience, which I don’t think was what Nagel had in mind. Interestingly enough, this unusual feature of Langsam’s theory does not seem to matter as much as we might have expected. For Langsam, phenomenal properties are acquired by entry into consciousness, which is fine as far as it goes, but seems more like a re-description than an explanation.

Langsam believes, as one would expect, that phenomenal experience has an inexpressible intrinsic nature. While simple physical sensations have structural properties, phenomenal experience does not. This does not seem to bother him much, though many would regard it as the central mystery. He thinks, however, that the sensory part of an experience – the unproblematic physical registration of something – and the phenomenal part are intelligibly linked. In fact, the properties of the sensory experience determine those of the phenomenal experience.  In sensory terms, we can see that red is more similar to orange than to blue, and for Langsam it follows that the phenomenal experience of red similarly has an intelligible similarity to the phenomenal experience of orange. In fact, the sensory properties explain the phenomenal ones.

This seems problematic. If the linkage is that close, then we can in fact describe phenomenal experience quite well; it’s intelligibly like sensory experience. Mary the colour scientist, who has never seen colours, actually will not learn anything new when she sees red: she will just confirm that the phenomenal experience is intelligibly like the sensory experience she already understood perfectly. In fact because the resemblance is intelligible – knowable a priori – she could work out what it was like before seeing red at all. To that Langsam might perhaps reply that by ‘a priori’ he means not just pure reasoning but introspection, a kind of internal empiricism.

It still leaves me with the feeling that Langsam has opened up a large avenue for naturalisation of phenomenal experience, or even suggested that it is in effect naturalised already. He says that the relationship between the phenomenal and the sensory is like the relation between part and whole; awfully tempting, then, to conclude that his version of phenomenal experience is merely an aspect of sensory experience, and that he is much more of a sceptic about phenomenality than he realises.

This feeling is reinforced when we move on to the causal aspects. Langsam wants phenomenal experience to have a role in making sensory perceptions available to attention, through entering consciousness. Surely this is making all the wrong people, from Langsam’s point of view, nod their heads: it sounds worryingly functionalist. Langsam wants there to be two kinds of causation: ‘brute causation’, the ordinary kind we all believe in, and intelligible causation, where we can just see the causal relationship. I enjoyed Langsam taking a pop at Hume, who of course denied there was any such thing; he suggests that Hume’s case is incomplete, and actually misses the most important bits. In Langsam’s view, as I read it, we just see inferences, perceiving intelligible relationships.

The desire to have phenomenal experience play this role seems to me to carry Langsam too far in another respect: he also claims that simply believing that p has a phenomenal aspect. I take it he wishes this to be the case so that this belief can also be brought to conscious attention by its phenomenal properties, but look; it just isn’t true. ‘Believing that p’ has no phenomenal properties whatever; there is nothing it is like to believe that p, in the way that there is something it is like to see a red flower. The fact that Langsam can believe otherwise reinforces the sense that he isn’t such a believer in full-blooded phenomenality as he supposes.

We can’t accuse him of lacking boldness, though. In the second part of the book he goes on to consider appropriateness and rationality; beliefs can be appropriate and rational, so why not desires? At this point we’re still apparently engaged on an enquiry into philosophy of mind, but in fact we’ve also started doing ethics. In fact I don’t think it’s too much of a stretch to say that Langsam is after Kant’s categorical imperative. Our desires can stem intelligibly from such sensations as pain and pleasure, and our attitudes can be rational in relation to the achievement of desires. But can there be globally rational desires – ones that are rational whatever we may otherwise want?

Langsam’s view is that we perceive value in things indirectly through our feelings and when our desires are for good things they are globally rational.  If we started out with Kant, we seem to have ended up with a conclusion more congenial to G. E. Moore. I admire the boldness of these moves, and Langsam fleshes out his theory extensively along the way – which may be the real point as far as he’s concerned. However, there are obvious problems about rooting global rationality in something as subjective and variable as feelings, and without some general theory of value Langsam’s system is bound to suffer a certain one-leggedness.

I do admire the overall boldness and ambition of Langsam’s account, and it is set out carefully and clearly, though not in a way that would be very accessible to the general reader. For me his views are ultimately flawed, but give me a flawed grand theory over a flawless elucidation of an insignificant corner every time.


Existential Comics raises an interesting question (thanks to Micha for pointing it out). In the strip a doctor with a machine that measures consciousness (rather like Tononi’s new machine, except that that measures awareness) tells an unlucky patient he lacks the consciousness-producing part of the brain altogether. Consequently, the doctor says, he is legally allowed to harvest the patient’s organs.

Would that be right?

We can take it that what the patient lacks is consciousness in the ‘Hard Problem’ sense. He can talk and behave quite normally, it’s just that when he experiences things there isn’t ‘something it is like’; there’s no real phenomenal experience. In fact, he is a philosophical zombie, and for the sake of clarity I take him to be a strict zombie; one of the kind who are absolutely like their normal human equivalent in every important detail except for lacking qualia (the cartoon sort of suggests otherwise, since it implies an actual part of the brain is missing, but I’m going to ignore that).

Would lack of qualia mean you also lacked human rights and could be treated like an animal, or worse? It seems to me that while lack of qualia might affect your standing as a moral object (because it would bear on whether you could suffer, for example), it wouldn’t stop you being a full-fledged moral subject (you would still have agency). I think I would consequently draw a distinction between the legal and the moral answer. Legally, I can’t see any reason why the absence of qualia would make any difference. Legal personhood, rights and duties are all about actions and behaviour, which takes us squarely into the realm of the Easy Problem. Our zombie friend is just like us in these respects; there’s no reason why he can’t enter into contracts, suffer punishments, or take on responsibilities. The law is a public matter; it is forensic – it deals with the things dealt with in the public forum; and it follows that it has nothing to say about the incorrigibly private matter of qualia.

Of course the doctor’s machine changes all that and makes qualia potentially a public matter (which is one reason why we might think the machine is inherently absurd, since public qualia are almost a contradiction in terms). It could be that the doctor is appealing to some new, recently-agreed legislation which explicitly takes account of his equipment and its powers. If so, such legislation would presumably have to have been based on moral arguments, so whichever way we look at it, it is to the moral discussion that we must turn.

This is a good deal more complicated. Why would we suppose that phenomenal experience has moral significance? There is a general difficulty because the zombie has experiences too. In conditions when a normal human would feel fear, he trembles and turns pale; he smiles and relaxes under the influence of pleasure; he registers everything that we all register. He writes love poetry and tells us convincingly about his feelings and tastes. It’s just that, on the inside, everything is hollow and void. But because real phenomenal experience always goes along with zombie-style experience, it’s hard for us to find any evidence as to why one matters when the other doesn’t.

The question also depends critically on what ethical theories we adopt. We might well take the view that our existing moral framework is definitive, authorised by God or tradition, and therefore if it says nothing about qualia, we should take no account of them either. No new laws are necessary, and there can be no moral reason to allow the harvesting of organs.

In this respect I believe it is the case that medieval legislatures typically saw themselves not as making new laws, but as rediscovering the full version of old ones, or following out the implications of existing laws for new circumstances. So when the English parliamentarians wanted to argue against Charles I’s ship money, rather than rest their case on inequity, fiscal distortion, or political impropriety, they appealed to a dusty charter of Ine, ancient ruler of Wessex (regrettably they referred to Queen Ine, whereas he had in fact been a robustly virile King).

Even within a traditional moral framework, therefore, we might find some room to argue that new circumstances called for some clarification; but I think we would find it hard going to argue for the harvesting.

What if we were utilitarians, those people who say that morality is acting to bring about the greatest happiness of the greatest number? Here we have a very different problem because the utilitarians are more than happy to harvest your organs anyway if by doing so they can save more than one person, no matter whether you have qualia or not. This unattractive kind of behaviour is why most people who espouse a broadly utilitarian framework build in some qualifications (they might say that while organ harvesting is good in principle actual human aversion to it would mean that in practice it did not conduce to happiness overall, for example).

The interesting point is whether zombie happiness counts towards the utilitarian calculation. Some might take the view that without qualia it had no real value, so that the zombie’s happiness figure should be taken as zero. Unfortunately there is no obvious answer here; it just depends what kind of happiness you think is important. In fact some consequentialists take the utilitarian system but plug into it desiderata other than happiness anyway. It can be argued that old-fashioned happiness utilitarianism would lead to us all sitting in boxes that directly stimulated our pleasure centres, so something more abstract seems to be needed; some even just speak of ‘utility’ without making it any more specific.
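The disagreement about whether zombie happiness counts can be put in toy computational terms. The sketch below is purely illustrative – the agents, the numbers, and the `zombie_weight` parameter are my inventions, not anyone’s actual moral calculus – but it shows how the same utilitarian sum gives different verdicts depending on whether zombie happiness is weighted at full value or at zero.

```python
def total_utility(agents, zombie_weight=1.0):
    """Toy utilitarian aggregate.

    Each agent is a (happiness, is_zombie) pair. zombie_weight controls
    how much a zombie's happiness counts: 1.0 treats it exactly like
    ordinary happiness, 0.0 treats it as worthless.
    """
    return sum(h * (zombie_weight if is_zombie else 1.0)
               for h, is_zombie in agents)

# A hypothetical population: one ordinary human and two zombies.
population = [(10, False), (8, True), (6, True)]

print(total_utility(population))                    # zombies count fully
print(total_utility(population, zombie_weight=0.0)) # zombie happiness zeroed
```

The interesting question, of course, is that nothing in the arithmetic tells you which weight to choose; that is exactly the philosophical dispute.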

No clear answer then, but it looks as if qualia might at least be relevant to a utilitarian.

What about the Kantians? Kant, to simplify somewhat, thought we should act in accordance with the kind of moral rules we should want other people to adopt. So, we should be right to harvest the organs so long as we were content that if we ourselves turned out to be zombies, the same thing would happen to us. Now I can imagine that some people might attach such value to qualia that they might convince themselves they should agree to this proposition; but in general the answer is surely negative. We know that zombies behave exactly like ordinary people, so they would not for the most part agree to having their organs harvested; so we can say with confidence that if I were a zombie I should still tell the doctor to desist.

I think that’s about as far as I can reasonably take the moral survey within the scope of a blog post. At the end of the day, are qualia morally relevant? People certainly talk as if they are in some way fundamental to value. “Qualia are what make my life worth living,” they say; unfortunately we know that zombies would say exactly the same.

I think most people, deliberately or otherwise, will simply not draw a distinction between real phenomenal experience on one hand and the objective experience of the kind a zombie can have on the other. Our view of the case will in fact be determined by what we think about people with and without feelings of both kinds, rather than people with and without qualia specifically. If so, qualia sceptics may find that grist to their mill.

Micha has made some interesting comments which I hope he won’t mind me reproducing.

The question of deontology vs consequentialism might be involved. A deontologist has less reason — although still some — to care about the content of the victim’s mind. Animals are also objects of morality; so the whole question may be quantitative, not qualitative.

Subjects like ethics aren’t easy for me to discuss philosophically to someone of another faith. Orthodox Judaism, like traditional Islam, is a legally structured religion. Therefore ethics aren’t discussed in the same language as in western society, since how the legal system processes revelation impacts conclusion.

In this case, it seems relevant that the talmud says that someone who kills adnei-hasadeh (literally: men of the field) is as guilty of murder as someone who kills a human being. It’s unclear what the talmud is referring to: it may be a roman mythical being who is like a human, but with an umbilicus that grows down to roots into the earth, or perhaps an orangutan — from the Malay for “man of the jungle”, or some other ape. Whatever it is, only actual human beings are presumed to have free will. And yet killing one qualifies as murder, not the killing of an animal.

Christof Koch declares himself a panpsychist in this interesting piece, but I don’t think he really is one. He subscribes to the Integrated Information Theory (IIT) of Giulio Tononi, which holds that consciousness is created by the appropriate integration of sufficient quantities of information. The level of integrated information can be mathematically expressed in a value called Phi: we have discussed this before a couple of times. I think this makes Koch an emergentist, but curiously enough he vigorously denies that.
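For a flavour of what ‘integrated information’ means, here is a toy sketch. It is emphatically not Tononi’s actual Phi, which involves partitions and causal structure and is far more elaborate; what it computes is a much simpler cousin, the multi-information (total correlation) of a two-part system, which is zero bits when the parts are statistically independent and positive when they share information.

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def multi_information(joint):
    """Total correlation: sum of marginal entropies minus joint entropy.

    A crude proxy for 'integration' -- NOT Tononi's Phi. `joint` maps
    (x, y) outcome pairs of a two-part system to probabilities.
    """
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Two perfectly correlated bits: each part looks random on its own,
# but jointly they share one full bit -- an 'integrated' system.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

# Two independent fair bits: no integration at all.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(multi_information(correlated))   # 1.0 bit
print(multi_information(independent))  # 0.0 bits
```

Real Phi additionally asks how much the system’s causal dynamics exceed those of its weakest partition, which is what makes it so hard to compute for anything brain-sized; but the contrast above at least shows the basic intuition of a measure that rewards parts carrying information about each other.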

Koch starts with a quotation about every outside having an inside which aptly brings out the importance of the first-person perspective in all these issues. It’s an implicit theme of what Koch says (in my reading at least) that consciousness is something extra. If we look at the issue from a purely third-person point of view, there doesn’t seem to be much to get excited about. Organisms exhibit different levels of complexity in their behaviour and it turns out that this complexity of behaviour arises from a greater complexity in the brain. You don’t say! The astonishment meter is still indicating zero. It’s only when we add in the belief that at some stage the inward light of consciousness, actual phenomenal experience, has come on that it gets interesting. It may be that Koch wants to incorporate panpsychism into his outlook to help provide that ineffable light, but attempting to make two theories work together is a risky path to take. I don’t want to accuse anyone of leaning towards dualism (which is the worst kind of philosophical bitchiness) but… well, enough said. I think Koch would do better to stick with the austere simplicity of IIT and say: that magic light you think you see is just integrated information. It may look a bit funny but that’s all it is, get used to it.

He starts off by arguing persuasively that consciousness is not the unique prerogative of human beings. Some, he says, have suggested that language is the dividing line, but surely some animals, preverbal infants and so on should not be denied consciousness? Well, no, but language might be interesting, not for itself but because it is an auxiliary effect of a fundamental change in brain organisation, one that facilitates the handling of abstract concepts, say (or one that allows the integration of much larger quantities of information, why not?). It might almost be a side benefit, but also a handy sign that this underlying reorganisation is in place, which would not be to say that you couldn’t have the reorganisation without having actual language. We would then have something, human-style thought, which was significantly different from the feelings of dogs, although the impoverishment of our vocabulary makes us call them both consciousness.

Still, in general the view that we’re dealing with a spectrum of experience, one which may well extend down to the presumably dim adumbrations of worms and insects, seems only sensible.

One appealing way of staying monist but allowing for the light of phenomenal experience is through emergence: at a certain level we find that the whole becomes more than the sum of its parts: we do sort of get something extra, but in an unobjectionable way. Strangely, Koch will have no truck with this kind of thinking. He says

‘the mental is too radically different for it to arise gradually from the physical’.

At first sight this seemed to me almost a direct contradiction of what he had just finished saying. The spectrum of consciousness suggests that we start with the blazing 3D cinema projector of the human mind, work our way down to the magic lanterns of dogs, the candles of newts, and the faint tiny glows of worms – and then the complete darkness of rocks and air. That suggests that consciousness does indeed build up gradually out of nothing, doesn’t it? An actual panpsychist, moreover, pushes the whole thing further, so that trees have faint twinkles and even tiny pieces of clay have a detectable scintilla.

Koch’s view is not, in fact, contradictory: what he seems to want is something like one of those dimmer switches that has a definite on and off, but gradations of brightness when on. He’s entitled to take that view, but I don’t think I agree that gradual emergence of consciousness is unimaginable. Take the analogy of a novel. We can start with Pride and Prejudice, work our way down through short stories or incoherent first drafts, to recipe books or collections of limericks, books with scribble and broken sentences, down to books filled with meaningless lines, and the chance pattern of cracks on a wall. All the way along there will be debatable cases, and contrarians who disbelieve in the real existence of literature can argue against the whole thing (‘You need to exercise your imagination to make Pride and Prejudice a novel; but if you are willing to use your imagination I can tell you there are finer novels in the cracks on my wall than anything Jane bloody Austen ever wrote…’) : but it seems clear enough to me that we can have a spectrum all the way down to nothing. That doesn’t prove that consciousness is like that, but makes it hard to assert that it couldn’t be.

The other reason it seems odd to hear such an argument from Koch is that he espouses the IIT, which seems to require just the kind of spectrum that sits well with emergentism. Presumably on Koch’s view a small amount of integrated information does nothing, but at some point, when there’s enough being integrated, we start to get consciousness? Yet he says:

“if there is nothing there in the first place, adding a little bit more won’t make something. If a small brain won’t be able to feel pain, why should a large brain be able to feel the god-awfulness of a throbbing toothache? Why should adding some neurons give rise to this ineffable feeling?”

Well, because a small brain only integrates a small amount of information, whereas a large one integrates enough for full consciousness? I think I must be missing something here, but look at this:

“[Consciousness] is a property of complex entities and cannot be further reduced to the action of more elementary properties. We have reached the ground floor of reductionism.”

Isn’t that emergence? Koch must see something else which he thinks is essential to emergentism which he doesn’t like, but I’m not seeing it.

The problem with Koch being panpsychist is that for panpsychists souls (or in this case consciousness) have to be everywhere. Even a particle of stone or a screwed-up sheet of wrapping paper must have just the basic spark; the lights must be at least slightly on. Koch doesn’t want to go quite that far – and I have every sympathy with that, but it means taking the pan out of the panpsychist. Koch fully recognises that he isn’t espousing traditional full-blooded panpsychism but in my opinion he deviates too far to be entitled to the badge. What Koch believes is that everything has the potential to instantiate consciousness when correctly organised and integrated. That amounts to no more than believing in the neutrality of the substrate, that neurons are not essential and that consciousness can be built with anything so long as its functional properties are right. All functionalists and a lot of other people (not everyone, of course) believe that without being panpsychists.

Perhaps functionalism is really the direction Koch’s theories lean towards. After all, it’s not enough to integrate information in any superficial way. A big database which exhaustively cross-referenced the Library of Congress would not seem much of a candidate for consciousness. Koch realises that there have to be some rules about what kinds of integration matter, but I think that if the theory develops far enough these other constraints will play an increasingly large role, until eventually we find that they have taken over the theory and the quantity of integrated information has receded to the status of a necessary but not sufficient condition.

I suppose that that might still leave room for Tononi’s Phi meter, now apparently built, to work satisfactorily. I hope it does, because it would be pretty useful.

It has always seemed remarkable to me that the ingestion of a single substance can have such complex effects on behaviour. Alcohol does it, in part, by promoting the effects of inhibitory neurotransmitters and suppressing the effects of excitatory ones, while also whacking up a nice surge of dopamine – or so I understand. This messes up co-ordination and can lead to loss of memory and indeed consciousness; but the most interesting effect, and the one for which alcohol is sometimes valued, is that it causes disinhibition. This allows us to relax and have a good time but may also encourage risky behaviour and lead to us saying things – in vino veritas – we wouldn’t normally let out.

Curiously, though, there’s no solid scientific support for the idea that alcohol causes disinhibition, and good evidence that alcohol is blamed for disinhibition it did not cause. One of the slippery things about the demon drink is that its effects are strongly conditioned by the drinker’s expectations. It has been shown that people who merely thought they were consuming alcohol were disinhibited just as if they had been; while other studies have shown that risky sexual behaviour can actually be deterred in those who have had a few drinks, if the circumstances are right.

One piece of research suggests that meta-consciousness is impaired by alcohol; drink makes us less aware of our own mental state. But a popular and well-supported theory these days is that drinking causes ‘alcohol myopia’. On this theory, when we’re drunk we lose track of long-term and remote factors, while our immediate surroundings seem more salient. One useful aspect of the theory is that it explains the variability of the effects of alcohol. It may make remoter worries recede and so leave us feeling unjustifiably happy with ourselves; but if reminders of our problems are close while the long term looks more hopeful, the effect may be depressing. Apparently subjects who had the words ‘AIDS KILLS’ actually written on their arm were less likely to indulge in risky sex (I suspect it might kind of dent your chances of getting a casual partner, actually).

A merry and appropriately disinhibited Christmas to you!

One of the main objections to panpsychism, the belief that mind, or at any rate experience, is everywhere, is that it doesn’t help. The point of a theory is to take an issue that was mysterious to begin with and make it clear; but panpsychism seems to leave us with just as much explaining to do as before. In fact, things may be worse. To begin with we only needed to explain the occurrence of consciousness in the human brain; once we embrace panpsychism we have to explain its occurrence everywhere and account for the difference between the consciousness in a lump of turf and the consciousness in our heads. The only way that could be an attractive option would be if there were really good and convincing answers to these problems ready to hand.

Creditably, Patrick Lewtas recognises this and, rolling up his sleeves, has undertaken the job of explaining first, how ‘basic bottom-level experience’ makes sense, and second, how it builds up to the high-level kind of experience going on in the brain. A first paper, tackling the first question, “What Is It Like to Be a Quark?”, appeared in the JCS recently (alas, there doesn’t seem to be an online version available to non-subscribers).

Lewtas adopts an idiosyncratic style of argument, loading himself with Constraints like a philosophical Houdini.

  1. Panpsychism should attribute to basic physical objects all but only those types of experiences needed to explain higher-level (including, but not limited to, human) consciousness.
  2. Panpsychism must eschew explanatory gaps.
  3. Panpsychism must eschew property emergence.
  4. Maximum possible complexity of experience varies with complexity of physical structure.
  5. Basic physical objects have maximally simple structures. They lack parts, internal structure, and internal processes.
  6. Where possible and appropriate, panpsychism should posit strictly-basic conscious properties similar, in their higher-order features, to strictly-basic physical properties.
  7. Basic objects with strictly-basic experiences have them constantly and continuously.
  8. Each basic experience-type, through its strictly-basic instances, characterizes (at least some) basic physical objects.

Of course it is these very constraints that end up getting him where he wanted to be all along.  To justify each of them and give the implications would amount to reproducing the paper; I’ll try to summarise in a freer style here.

Lewtas wants his basic experience to sit with basic physical entities and he wants it to be recognisably the same kind of thing as the higher level experience. This parsimony is designed to avoid any need for emergence or other difficulties; if we end up going down that sort of road, Lewtas feels we will fall back into the position where our theory is too complex to be attractive in competition with more mainstream ideas. Without seeming to be strongly wedded to them, he chooses to focus on quarks as his basic unit, but he does not say much about the particular quirks of quarks; he seems to have chosen them because they may have the property he’s really after; that of having no parts.

The thing with no parts! Aiee! This ancient concept has stalked philosophy for thousands of years under different names: the atom, a substance, a monad (the first two names long since passed on to other, blameless ideas). I hesitate to say that there’s something fundamentally problematic with the concept itself (it seems to work fine in geometry); but in philosophy it seems hard to handle without generating a splendid effusion of florid metaphysics.  The idea of yoking it together with the metaphysically tricky modern concept of quarks makes my hair stand on end. But perhaps Lewtas can keep the monster in check: he wants it, presumably, because he wants to build on bedrock, with no question of basic experience being capable of further analysis.

Some theorists, Lewtas notes, have argued that the basic level experience of particles must be incomprehensible to us; as incomprehensible as the experiences of bats according to Nagel, or indeed even worse. Lewtas thinks things can, and indeed must, be far simpler and more transparent than that. The experience of a quark, he suggests, might just be like the simple experience of red; red detached from any object or pattern, with no limits or overtones or significance; just red.  Human beings can most probably never achieve such simplicity in its pure form, but we can move in that direction and we can get our heads around ‘what it’s like’ without undue difficulty.

Now the partless thing begins to give trouble; a thing which has no parts cannot change, because change would imply some kind of reorganisation or substitution; you can’t rearrange something that has no parts and if you substitute anything you have to substitute another whole thing for the first one, which is not change but replacement. At best the thing’s external relations can change. If one of the properties of the quark is an experience of red, therefore, that’s how it stays. It carries on being an experience of red, and it does not respond in any way to its environment or anything outside itself. I think we can be forgiven if we already start to worry a little about how this is going to work with a perceptual system, but that is for the later paper.

Lewtas is aware that he could be in for an awfully large catalogue of experiences here if every possible basic experience has to be assigned to a quark. His hope is that some experiences will turn out to be composites, so that we’ll be able to make do with a more restricted set: and he gives the example of orange experience reducing to red and yellow experience. A bad example: orange experience is just orange experience, actually, and the fact that orange paint can be made by mixing red and yellow paint is just a quirk of the human visual system, not an essential quality of orange light or orange phenomenology. A bad example doesn’t mean the thesis is false; but a comprehensive reduction of phenomenology to a manageable set of basic elements is a pretty non-trivial requirement. I think in fact Lewtas might eventually be forced to accept that he has to deal with an infinite set of possible basic experiences. Think of the experience of unity, duality, trinity…  That’s debatable, perhaps.

At any rate Lewtas is prepared to some extent. He accepts explicitly that the number of basic experiences will be greater than the number of different kinds of basic quark, so it follows that basic physical units must be able to accommodate more than one basic experience at the same time. So your quark is having a simple, constant experience of red and at the same time it’s having a simple, constant experience of yellow.

That has got to be a hard idea for Lewtas to sell. It seems to risk the simple transparency which was one of his main goals, because it is surely impossible to imagine what having two or more completely pure but completely separate experiences at the same time is like.  However, if that bullet is bitten, then I see no particular reason why Lewtas shouldn’t allow his quarks to have all possible experiences simultaneously (my idea, not his).

By the time we get to this point I find myself wondering what the quarks, or the basic physical units, are contributing to the theory. It’s not altogether clear how the experiences are anchored to the quarks and since all experiences are going to have to be readily available everywhere, I wonder whether it wouldn’t simplify matters to just say that all experiences are accessible to all matter. That might be one of the many issues cleared up in the paper to follow where perhaps, with one cat-like leap, Lewtas will escape the problems which seem to me to be on the point of having him cornered…