Derek Parfit, who died recently, in two videos from an old TV series…

Parfit was known for his attempts in Reasons and Persons to gently dilute our sense of self using thought experiments about Star Trek-style transporters and turning himself gradually into Greta Garbo. I think that by assuming the brain could in principle be scanned and 3D printed in a fairly simple way, these generally underestimated the fantastic intricacy of the brain and begged questions about the importance of its functional organisation and history; this in turn led Parfit to give too little attention to the possibility that perhaps we really are just one-off physical entities. But Parfit’s arguments have been influential, perhaps partly because they grounded an attractively empathetic and unselfish moral outlook, making him less worried about himself and more worried about others. They also harmonised well with Buddhist thought, and continue to have a strong appeal to some.

Myself I lean the other way; I think virtue comes from proper pride, and that nothing much can be expected from someone who considers themselves more or less a nonentity to begin with. To me a weaker sense of self could be expected to lead to moral indifference; but the evidence is not at all in my favour so far as Parfit and his followers are concerned.

In fact Parfit went on to mount a strong defence of the idea of objective moral truth in another notable book, On What Matters, where he tried to reconcile a range of ethical theories, including an attempt to bring Kant and consequentialism into agreement. To me this is a congenial project which Parfit approached in a sensible way, but it seems to represent an evolution of his views. Here he wanted to be a friend to Utilitarianism, brokering a statesmanlike peace with its oldest enemy; in his earlier work he had offered a telling criticism in his ‘Repugnant Conclusion’:

The Repugnant Conclusion: For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living.

This is in effect a criticism of utilitarian arithmetic; trillions of just tolerable lives can produce a sum of happiness greater than a few much better ones, yet the idea we should prefer the former is repugnant. I’m not sure this conclusion is necessarily quite as repugnant as Parfit thought. Suppose we have a world where the trillions and the few are together, with the trillions living intolerable lives and just about to die; but the happy few could lift them to survival and a minimally acceptable life if they would descend to the same level; would the elite’s agreement to share really be repugnant?
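The arithmetic at issue can be made concrete with a toy sketch (the welfare figures below are illustrative assumptions, not Parfit’s own numbers): under a simple total-utilitarian sum, a vast population of barely-worth-living lives outweighs a much smaller, much happier one.

```python
# Toy illustration of the total-utilitarian arithmetic behind the
# Repugnant Conclusion. All welfare figures are invented for illustration.

def total_utility(population: int, welfare_per_life: float) -> float:
    """Sum of welfare across a population (classical total utilitarianism)."""
    return population * welfare_per_life

# Ten billion people, all with a very high quality of life...
happy_few = total_utility(10_000_000_000, 100.0)

# ...versus ten trillion people whose lives are barely worth living.
teeming_masses = total_utility(10_000_000_000_000, 0.5)

# The larger population wins on the simple sum - the result Parfit
# found repugnant.
print(teeming_masses > happy_few)  # True: 5e12 > 1e12
```

Any positive welfare per life, however small, can be made to dominate by scaling the population up, which is why the conclusion follows for *any* merely finite happy population.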

Actually our feelings about all this are unavoidably contaminated by assumptions about the context. Utilitarianism is a highly abstract doctrine and we assume here that two one-off states of affairs can be compared; but in the real world our practical assessment of future consequences would dominate. We may, for example, feel that the bare survival option would in practice be unstable and eventually lead to everyone dying, while the ‘privileged few’ option has a better chance of building a long-term prosperous future.

Be that as it may, whichever way we read things this seems like a hit against consequentialism. The fact that Parfit still wanted that theory as part of his grand Triple Theory of ethical union probably tells us something about the mild and kindly nature of the man, something that no doubt has contributed to the popularity of his ideas.

Machine learning and neurology; the perfect match?

Of course there is a bit of a connection already in that modern machine learning draws on approaches which were distantly inspired by the way networks of neurons seemed to do their thing. Now though, it’s argued in this interesting piece that machine learning might help us cope with the vast complexity of brain organisation. This complexity puts brain processes beyond human comprehension, it’s suggested, but machine learning might step in and decode things for us.

It seems a neat idea, and a couple of noteworthy projects are mentioned: the ‘atlas’ which mapped words to particular areas of cortex, and an attempt to reconstruct seen faces from fMRI data alone (actually with rather mixed success, it seems). But there are surely a few problems too, as the piece acknowledges.

First, fMRI isn’t nearly good enough. Existing scanning techniques just don’t provide the neuron-by-neuron data that is probably required, and never will. It’s as though the only camera we had was permanently out of focus. Really good processing can do something with dodgy images, but if your lens was rubbish to start with, there are limits to what you can get. This really matters for neurology where it seems very likely that a lot of the important stuff really is in the detail. No matter how good machine learning is, it can’t do a proper job with impressionistic data.

We also don’t have large libraries of results from many different subjects. A lot of studies really just ‘decode’ activity in one context in one individual on one occasion. Now it can be argued that that’s the best we’ll ever be able to do, because brains do not get wired up in identical ways. One of the interesting results alluded to in the piece is that the word ‘poodle’ in the brain ‘lives’ near the word ‘dog’. But it’s hardly possible that there exists a fixed definite location in the brain reserved for the word ‘poodle’. Some people never encounter that concept, and can hardly have pre-allocated space for it. Did Neanderthals have a designated space for thinking about poodles that presumably was never used throughout the history of the species? Some people might learn of ‘poodle’ first as a hairstyle, before knowing its canine origin; others, brought up to hard work in their parents’ busy grooming parlour from an early age, might have as many words for poodle as the Eskimos were supposed to have for snow. Isn’t that going to affect the brain location where the word ends up? Moreover, what does it mean to say that the word ‘lives’ in a given place? We see activity in that location when the word is encountered, but how do we tell whether that is a response to the word, the concept of the word, the concept of poodles, poodles, a particular known poodle, or any other of the family of poodle-related mental entities? Maybe these different items crop up in multiple different places?

Still, we’ll never know what can be done if we don’t try. One piquant aspect of this is that we might end up with machines that can understand brains, but can never fully explain them to us, both because the complexity is beyond us and because machine learning often works in inscrutable ways anyway. Maybe we can have a second level of machine that explains the first level machines to us – or a pair of machines that each explain the brain and can also explain each other, but not themselves?

It all opens the way for a new and much more irritating kind of robot. This one follows you around and explains you to people. For some of us, some of the time, that would be quite helpful. But it would need some careful constraints, and the fact that it was basically always right about you could become very annoying. You don’t want a robot that says “nah, he doesn’t really want that, he’s just being polite”, or “actually, he’s just not that into you”, let alone “ignore him; he thinks he understands hermeneutics, but actually what he’s got in mind is a garbled memory of something else about Derrida he read once in a magazine”.

Happy New Year!

A ghost? Humbug! Yet it was the same face: the very same. Marley in his pigtail, usual waistcoat, tights and boots; the tassels on the latter bristling, like his pigtail, and his coat-skirts, and the hair upon his head. The chain he drew was clasped about his middle. It was long, and wound about him like a tail; and it was made (for Scrooge observed it closely) of cash-boxes, keys, padlocks, ledgers, deeds, and heavy purses wrought in steel. His body was transparent; so that Scrooge, observing him, and looking through his waistcoat, could see the two buttons on his coat behind.

Scrooge had often heard it said that Marley had no bowels, but he had never believed it until now.

No, nor did he believe it even now. Though he looked the phantom through and through, and saw it standing before him; though he felt the chilling influence of its death-cold eyes; and marked the very texture of the folded kerchief bound about its head and chin, which wrapper he had not observed before; he was still incredulous, and fought against his senses.

“You don’t believe in me,” observed the Ghost.

“I don’t,” said Scrooge.

“What evidence would you have of my reality beyond that of your senses?”

“I don’t know,” said Scrooge. “Have you read Hume, Jacob? No? You see, to me you’re in the nature of a miracle, something that contradicts all the established understanding of the world. The most parsimonious assumption in such a case, you know, is always that a miraculous event such as your appearance is a delusion.”

“Why do you doubt your senses?”

“Because,” said Scrooge, “a little thing affects them. A slight disorder of the stomach makes them cheats. You may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of an underdone potato. There’s more of gravy than of grave about you, whatever you are!”

Scrooge was not much in the habit of cracking jokes, nor did he feel, in his heart, by any means waggish then. The truth is, that he tried to be smart, as a means of distracting his own attention, and keeping down his terror; for the spectre’s voice disturbed the very marrow in his bones.

To sit, staring at those fixed glazed eyes, in silence for a moment, would play, Scrooge felt, the very deuce with him. There was something very awful, too, in the spectre’s being provided with an infernal atmosphere of its own. Scrooge could not feel it himself, but this was clearly the case; for though the Ghost sat perfectly motionless, its hair, and skirts, and tassels, were still agitated as by the hot vapour from an oven.

“You see this toothpick?” said Scrooge, returning quickly to the charge, for the reason just assigned; and wishing, though it were only for a second, to divert the vision’s stony gaze from himself.

“I do,” replied the Ghost.

“You are not looking at it,” said Scrooge.

“But I see it,” said the Ghost, “notwithstanding.”

“Well!” returned Scrooge, “I have but to swallow this, and be for the rest of my days persecuted by a legion of goblins, all of my own creation. Humbug, I tell you! humbug!”

“Yet here I am. You see clearly who I once was. I can tell you, if you wish, things that only Jacob Marley could have known: do you doubt it?” said the spirit.

“No. But would those things come from your brain, or from mine? You see, we know now, Jacob, that consciousness is a product of the brain. Have you read Fechner*? No? Well, really, what have you been doing with your evenings, Jacob?”

Scrooge clung desperately to his exposition as the best means of retaining his equanimity in the face of the apparition’s unwavering gaze; at the same time he felt a little pride over his steadfastness. No great city banker, rich and overbearing as he might be, had ever intimidated Scrooge, and he was not about to be cowed by the mere shade of his own dead partner.

“It has been proved not only that consciousness is amenable to scientific investigation, but that it obeys hard mathematical laws; and there’s no manner of doubt that it resides in the brain. Now your brain is dead, Jacob – there’s no question about it – so no possibility of your consciousness persisting arises. Unless we are to be panpsychists, but if that were true, why, I might as well worry about the consciousness of the knocker on the front door!”

“Perhaps you should,” intoned the ghost, unmoved, “I came to effect a moral reformation, but I see I must begin with mereology. You see these ledgers? These frightful deeds chained about me? Why do you suppose I must carry them everywhere?”

“I don’t know… Can it be meant as a punishment, Jacob?” returned Scrooge.

“No; though they are burden enough. These columns of figures, these legal documents, were the tools I used in life to think about my business. They are as much part of my mind as the brain I once had. And though my body is dissolved, they remain, do they not? Is not that part of my mind still growing and flourishing in your counting-house?”

“I keep the books, certainly, Jacob; but that would be a narrow kind of mind…” Scrooge fell silent as he saw the trap he was falling into.

“Narrow indeed, Ebenezer Scrooge!” replied the spirit, “and when did your thoughts last spend a cheerful hour outside the counting house?”

Scrooge looked abashed, but he was thinking quickly.

“You see, spirit,” he resumed, “those account books may retain vestiges of your personality. But ink upon a page is nothing without a brain… very well, then, without a human being, to animate it, to give it significance. Now Cratchit may read those books; or I may. You may not. So if you are brought here tonight by the revivification of those traces, it is by my mind, and you are indeed nothing more than a phantom of my brain, as I said!”

At this the ghost let out a terrible roar.

“Prepare yourself, Ebenezer Scrooge!” it thundered, “You shall be visited by three ghosts of disembodied consciousness! The Ghost of Dualism Past; the Ghost of Algorithms Present; and the Ghost of Uploading Yet to Come! Expect the first at the stroke of midnight.”

“Humbug!” said Scrooge, excitedly, “Double Humbug, I say! And Humbug on stilts!”

 

* Scrooge was actually rather lucky to get away with that one. He is, of course, alluding to Fechner’s Law, which relates subjective sensation to the logarithm of the intensity of the stimulus, hence at the time a shining example of the new empirical psychology (actually rather too new – I don’t think it was published even in German until after A Christmas Carol). Strangely enough, neither Scrooge nor Marley seems aware that Fechner himself believed in a form of panpsychism and had already set out in Das Büchlein vom Leben nach dem Tode (1836) his vision of human life as having three stages: a sleep before birth, normal life in the middle stage, and entry into the general communion of consciousness after death, with the dead still able to exert a helpful influence on the living. He would definitely not have been on Scrooge’s side in this discussion.
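For reference, Fechner’s Law in its usual modern form expresses the perceived sensation S as a logarithmic function of the stimulus intensity I:

```latex
S = k \log \frac{I}{I_0}
```

where \(I_0\) is the threshold intensity below which nothing is sensed and \(k\) is a constant depending on the sense modality – so equal ratios of stimulus produce equal increments of sensation.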

Does language capture reality – or capture us in a cage of our own making?
Hilary Lawson believes we close the world; its rich, polyvalent potentiality is closed down into our limited stock of concepts and finite vocabulary so that our language doesn’t deliver reality to us at all; it lies beyond words. He recognises the difficulty of expounding, in words, the fundamental inadequacy of language.
There’s some truth in that view (and Lawson’s strictures about the limitations of our senses recall the basis of Scott Bakker’s Blind Brain Theory). We surely do see things differently. Imagine a couple viewing houses; they have different priorities and see different things about each house. Later, when they discuss the one with the conservatory it’s not much of a stretch to say that they’re discussing different houses – though Lawson has in mind a more radical problem than that! In fact when you think about it, what he demands is astonishing – that in order to capture reality we have to apprehend the whole totality of it in every aspect. Either we have exhaustive perfect acquaintance with reality – or it’s not reality at all. Why aren’t small chunks of reality real?
Emma Borg, indeed, believes that there is an objective reality and words can tell us about it. Sure, language is a human fantasy, a human construct, but that doesn’t mean it can’t tell us about the world. It’s human beings that give meanings to things, in fact. She grants that there are many different perspectives we can take, but ultimately some descriptions of the world just yield better results than others – as we’d have to concede in the case of, say, medical diagnosis.
Daniel Everett thinks there are certainly big differences between the way different cultures address the world; in fact he says the idea of being “beyond words” would be hard to articulate in many cultures. Everett is of course famous for his controversial descriptions of the Piraha language, which has no numbers or colours and seems strangely restricted in other interesting ways. People have challenged his research, but no-one else really has anything like Everett’s depth of experience and knowledge of the Piraha. The cultural differences he describes seem to support the idea that words trap us in a “reality” of our own, but he also points out that we develop shared conventions and end up talking like the people we talk with.
Words can certainly be understood differently; who hasn’t picked up a word from hearing it in context, only to discover years later that the dictionary definition is not what we expected – while years of using the word slightly wrong, and therefore not saying quite what we thought we were saying, have passed unnoticed? (It turns out that “strictures” above might not really have been the word I wanted…)
Myself, I don’t think language is primarily about describing the world for our own benefit anyway; it’s more about influencing other people’s thoughts and creating harmonised streams of shared thoughts. It’s a pragmatic game, too, not a formal encoding based on a fixed intellectual structure; it’s not unlike a game of charades whose players have developed a wonderful set of conventions that let them signal at blazing speed. So I’m really with Borg, I think.

We are in danger of being eliminated by aliens who aren’t even conscious, says Susan Schneider. Luckily, I think I see some flaws in the argument.

Humans are probably not the greatest intelligences in the Universe, she suggests; others probably have been going for billions of years longer. Perhaps, but maybe they have all attained enlightenment and moved on from this plane, leaving us young dummies the cleverest or the only people around?

Schneider thinks the older cultures are likely to be post-biological, having moved on into machine forms of intelligence. This transition may only take a few hundred years, she suggests, to ‘judge from the human experience’ (Have we transitioned? Did I miss it?). She says transistors are much faster than neurons and computer power is almost indefinitely expandable, so AI will end up much cleverer than us.

Then there may be a problem over controlling these superlatively bright computers, as foreseen by Stephen Hawking, Elon Musk, and Bill Gates. Bill Gates? The man who, by exploiting the monopoly handed to him by IBM, was able to impose on us all the crippled memory management of DOS and the endless vulnerabilities of Windows? Well, OK; not sure he has much idea about technology, but he’s got form on trying to retain control of things.

Schneider more or less takes it for granted that computation is cogitation, and that faster computation means smarter thinking. It’s true that computers have become very good at games we didn’t think they could play at all, and she reminds us of some examples. But to take over from human beings, computers need more than just computation. To mention two things, they need agency and intentionality, and to date they haven’t shown any capacity at all for either. I don’t rule out the possibility of both being generated artificially in future, but the ever-growing ability of computers to do more sums more quickly is strictly irrelevant. Those future artificial people of whom we know nothing may be able to exploit the power of computation – but so can we. If computers are good at winning battles, our computers can fight their computers.

Schneider also takes it for granted that her computational aliens will be hostile and likely to come over and fuck us up good if they ever know we exist. They might, for example, infect our systems with computer viruses (probably not, I think, because without Bill Gates providing their operating systems computer viruses probably remained a purely theoretical matter for them). Sending signals out into the galaxy, she reckons, is a really bad idea; our radio signals are already out there, but luckily they are faint and easily missed (even by unimaginably super-intelligent aliens, it seems). Premature to worry, surely, because even our earliest radio signals can be no more than about a hundred light years away so far – not much of a distance in galactic terms. But why would super-intelligent entities behave like witless bullies anyway? Somewhere between benign and indifferent seems a more likely attitude.

To me this whole scenario seems to embody a selective prognosis anyway. The aliens have overcome the limitation of the speed of light, they feed off black holes (no clue, sorry) but they still run on the computation we currently think is really smart. A hundred years ago no-one would have supposed computation was going to be the dominant technology of our decade, let alone the next million years; maybe by 2116 we’ll look back on it the way we fondly remember steam locomotion.

Schneider’s most arresting thought is that her dangerous computational aliens might lack qualia, and so in that sense not be conscious. It seems to me more natural to suppose that acquiring human-style thought would necessarily involve acquiring human-style qualia. Schneider seems to share the Searlian view that qualia have something to do with unknown biological qualities of neural tissue which silicon can never share. Even if qualia could be engineered into silicon, why would the aliens bother, she asks – it’s just an extra overhead that might add unwanted ethical issues. Most surprisingly, she supposes that we might be able to test the proposition! Suppose that for medical reasons we replace parts of a functioning human brain with chips; we might then find that qualia are lost.

But how would we know? Ex hypothesi, qualia have no causal powers and so could not cause any change in our behaviour. Even if the qualia vanished, the fact could not be reported. None of the things we say about qualia were caused by qualia; that’s one of the bizarre things about them.

Anyway, I say if we’re going to indulge in this kind of wild speculation, let’s really go big; I say the super-intelligent aliens will be powered by hyper-computation, a technology that makes our concept of computation look like counting on your fingers; and they’ll have not only qualia, but hyper-qualia, experiential phenomenologica whose awesomeness we cannot even speak of. They will be inexpressibly kindly and wise and will be borne to Earth to visit us on special wave-forms, beyond our understanding but hugely hyperbolic…

What is evil? Peter Dews says it’s an idea we’re not comfortable with any more; after the shock of Nazism, Hannah Arendt thought we’d spend the rest of the century talking about it; but actually very little was said. We are inclined to talk about conspicuous badness as something that has somehow been wired into some people’s nature; but if it’s wired in, they had no choice and it can’t be really evil…

Simon Baron-Cohen rests on the idea that often what we’re really dealing with is a failure of empathy and explains some of the ways modern research is showing it can fall short through genetics or other issues. Dews raises a good objection; that moral goodness and empathy are clearly distinct. Your empathy with your wife might give you the knowledge you need to be really hurtful, for example. Baron-Cohen has an answer to this particular example in his distinction between cognitive and affective empathy – it’s one thing to understand another person’s feelings and quite another to share them or care about them. But surely there are other ways empathy isn’t quite right? Empathy with wicked people might cause us to help them in their wrong-doing, mightn’t it? Lack of empathy might allow you to be a good dentist…

Rebecca Roache thinks evil is a fuzzy concept but one that is entwined in our moral discourse and one we should be the poorer for abandoning. Describing the Holocaust as ‘very bad’ wouldn’t really do the job.

In my own view, to be evil requires that you understand right and wrong, and choose wrong. This seems impossible, because according to Socrates, anyone who really understands what good is must want to do it. It has never looked like that in real life, however, where there seem to be plenty of people doing things they know are wrong.

Luckily I recently set out in a nutshell the complete and final theory of ethics. In even briefer form: I think we seek to act effectively out of a kind of roughly existentialist self-assertion. We see that general aims serve our purpose better than limited ones and so choose to act categorically on roughly Kantian reasoning. A sort of empty consequentialism provides us with a calculus by which to choose the generally effective over the particular, but unfortunately the values are often impossible to assess in practice. We therefore fall back on moral codes, sets of rules we know are likely to give the best results in the long run.

Now, that suggests Socrates was broadly right; doing the right thing just makes sense. But the system is complicated; there are actually several different principles at work at different levels, and this does give rise to real conflicts.

At the lower levels, these conflicts can give rise to the appearance of evil. Different people may, for example, quite properly have slightly different moral codes that either legitimately reflect cultural difference or are matters of mere convention. Irritable people may see the pursuit of a different code from their own as automatically evil. Second, there’s a genuine tension between any code and the consequentialist rationale that supports it. We follow the code because we can’t do the calculus, but every now and then, as in the case of white lies, the utility of breaking the code is practically obvious. People who cling to the code, or people who treat it as flexible, may be seen as evil by those who make different judgements. In fact all these conflicts can be internalised and lead to feelings of guilt and moral doubt; we may even feel a bit bad ourselves.

None of those examples really deserve to be called evil in my view though; that label only applies to higher level problems. The whole thing starts with self-assertion, and some may feel that deliberate wickedness allows them to make a bigger splash. Sure, they may say, I understand that my wrongdoing harms society and thereby indirectly harms my own consequential legacy. But I reckon people will sort of carry things for me; meanwhile I’ll write my name on history far more effectively as a master of wickedness than as a useful clerk. This is a mistake, undoubtedly, but unfortunately the virtuous arguments are rather subtle and unexciting, whereas the false reasoning is Byronic and attractive. I reckon that’s how the deliberate choice of recognised evil sometimes arises.

Consciousness – it’s all been a terrible mistake. In a really cracking issue of the JCS (possibly the best I’ve read) Keith Frankish sets out and defends the thesis of illusionism, with a splendid array of responses from supporters and others.

How can consciousness be an illusion? Surely an illusion is itself a conscious state – a deceptive one – so that the reality of consciousness is a precondition of anything being an illusion? Illusionism, of course, is not talking about the practical, content-bearing kind of consciousness, but about phenomenal consciousness, qualia, the subjective side, what it is like to see something. Illusionism denies that our experiences have the phenomenal aspect they seem to have; it is in essence a sceptical case about phenomenal experience. It aims to replace the question of what phenomenal experience is, with the question of why people have the illusion of phenomenal experience.

In one way I wonder whether it isn’t better to stick with raw scepticism than frame the whole thing in terms of an illusion. There is a danger that the illusion itself becomes a new topic and inadvertently builds the confusion further. One reason the whole issue is so difficult is that it’s hard to see one’s way through the dense thicket of clarifications thrown up by philosophers, all demanding to be addressed and straightened out. There’s something to be said for the bracing elegance of the two-word formulation of scepticism offered by Dennett (who provides a robustly supportive response to illusionism here, as being the default case) – ‘What qualia?’. Perhaps we should just listen to the ‘tales of the qualophiles’ – there is something it is like, Mary knows something new, I could have a zombie twin – and just say a plain ‘no’ to all of them. If we do that, the champions of phenomenal experience have nothing to offer; all they can do is, as Pete Mandik puts it here, gesture towards phenomenal properties. (My imagination whimpers in fear at being asked to construe the space in which one might gesture towards phenomenal qualities, let alone the ineffable limb with which the movement might be performed; it insists that we fall back on Mandik’s other description; that phenomenalists can only invite an act of inner ostension.)

Eric Schwitzgebel relies on something like this gesturing in his espousal of definition by example as a means of getting the innocent conception of phenomenal experience he wants without embracing the dubious aspects. Mandik amusingly and cogently assails the scepticism of the illusionist case from an even more radical scepticism – meta-illusionism. Sceptics argue that phenomenalism can’t be specified meaningfully (we just circle around a small group of phrases and words that provide a set of synonyms with no definition outside the loop), but if that’s true how do we even start talking about it? Whereof we cannot speak…

Introspection is certainly the name of the game, and Susan Blackmore has a nifty argument here; perhaps it’s the very act of introspecting that creates the phenomenal qualities? Her delusionism tells us we are wrong to think that there is a continuous stream of conscious experience going on in the absence of introspection, but stops short of outright scepticism about the phenomenal. I’m not sure. William James told us that introspection must be retrospection – we can only mentally examine the thought we just had, not the one we are having now – and it seems odd to me to think that a remembered state could be given a phenomenal aspect after the fact. Easier, surely, to consider that the whole business is consistently illusory?

Philip Goff is perhaps the toughest critic of illusionism; if we weren’t in the grip of scientism, he says, we should have no difficulty in seeing that the causal role of brain activity also has a categorical nature which is the inward, phenomenal aspect. If this view is incoherent or untenable in any way, we’re owed a decent argument as to why.

Myself I think Frankish is broadly on the right track. He sets out three ways we might approach phenomenal experience. One is to accept its reality and look for an explanation that significantly modifies our understanding of the world. Second, we look for an explanation that reconciles it with our current understanding, finding explanations within the world of physics of which we already have a general understanding. Third, we dismiss it as an illusion. I think we could add ‘approach zero’: we accept the reality of phenomenal experience and just regard it as inexplicable. This sounds like mysterianism – but mysterians think the world itself makes sense; we just don’t have the brains to see it. Option zero says there is actual irreducible mystery in the real world. This conclusion is surely thoroughly repugnant to most philosophers, who aspire to clear answers even if they don’t achieve them; but I think it is hard to avoid unless we take the sceptical route. Phenomenal experience is on most mainstream accounts something over and above the physical account just by definition. A physical explanation is automatically ruled out; even if good candidates are put forward, we can always retreat and say that they explain some aspects of experience, but not the ineffable one we are after. I submit that in fact this same strategy of retreat means that there cannot be any satisfactory rational account of phenomenal experience, because it can always be asserted that something ineffable is missing.

I say philosophers will find this repugnant, but I can sense some amiable theologians sidling up to me. Those lightweight would-be scientists can’t deal with mystery and the ineffable, they say, but hey, come with us for a bit…

Regular readers may possibly remember that I think the phenomenal aspect of experience is actually just its reality; that the particularity or haecceity of real experience is puzzling to those who think that theory must accommodate everything. That reality is itself mysterious in some sense, though: not easily accounted for, and not susceptible to satisfactory explanation either by induction or deduction. It may be that to understand it in full we have to give up on these more advanced mental tools and fall back on the basic faculty of recognition – in my view the basis of all our thinking, of which both deduction and induction are specialised forms. That implies we might have to stop applying logic and science and just contemplate reality; I suppose that might mean in turn that meditation and the mystic tradition of some religions is not exactly a rejection of philosophy as understood in the West, but a legitimate extension of the same enquiry.

Yeah, but no; I may be irredeemably Western and wedded to scientism, but rightly or wrongly, meditation doesn’t scratch my epistemic itch. Illusionism may not offer quite the right answer, but for me it is definitely asking the right questions.


Susan Blackmore, champion of memes, interviews Patricia Churchland, author of the classic Neurophilosophy.  With Swedish subtitles.

(Sorry, I’m not clever enough to embed this one.)

Philip Goff tells us that panpsychism is an appealingly simple view. I do think he has captured an important point, and one which makes a real contribution to panpsychism’s otherwise puzzling ability to attract adherents. But although the argument is clear and well-constructed I could hardly agree less.

Even his opening sentence has me shaking my head…

Common sense tells us that only living things have consciousness.

Hm; I’m not altogether sure such questions are really even within the scope of common sense, but popular culture seems to tell us that people are generally happy to assume that robots may be conscious. In fact, I suspect that only our scientific education stops us attributing agency to the weather, stones that trip us up, and almost anything that moves. It isn’t only Basil Fawlty that shouts at his car!

Goff suggests that the main argument against panpsychism (approximately the view that everything everywhere is conscious: I skip here various caveats and clarifications which don’t affect the main argument) is just that it is ‘crazy’ – that it conflicts with common sense. He goes on to rebut this by pointing out that relativity and Darwinism both conflict with common sense too. This seems dangerously close to the classic George Spiggott argument so memorably refuted in the 1967 film Bedazzled:

Stanley Moon: You’re a nutcase! You’re a bleedin’ nutcase!
George Spiggott: They said the same of Jesus Christ, Freud, and Galileo.
Stanley Moon: They said it of a lot of nutcases too.
George Spiggott: You’re not as stupid as you look, are you, Mr. Moon?

But really we’re fighting a straw man; the main argument against panpsychism is surely not a mere appeal to common sense. (Who are these philosophers who stick to common sense and how do they get any work done?) One of the candidates for the main counter-argument must surely be the difficulty of saying exactly which of the teeming multi-layered dynasties of entities in the universe we deem to be conscious, whether composite entities qualify, and if so, how on Earth that works. Another main line of hostile argument must be the problem of explaining how these ubiquitous consciousnesses relate to the ordinary kind that appears to operate in brains. Perhaps the biggest objection of all is to panpsychism’s staggering ontological profligacy. William of Occam told us to use as few angels as possible; panpsychism stations one in every particle of the cosmos.

How could such a massive commitment represent simplicity? The thing is, Goff isn’t starting from nothing; he already has another metaphysical commitment. He believes that things have an intrinsic nature apart from their physical properties. Science, on this view, is all about a world that often, rather mysteriously, gets called the ‘external’ world. It tells us about the objectively measurable properties of things, but nothing at all about the things in themselves. No doubt Goff has reasons for thinking this that he has set out elsewhere, probably in the book from which he helpfully provides an interesting chapter.

But whatever his grounds may be, I think this view is itself hopeless. For one thing, if these intrinsic natures have no physical impact, nothing we ever say or write can have been caused by them. That seems worrying. Ah, but here I’m inadvertently beginning to make Goff’s case for him, because what else is there that never causes any of the things we say about it? Qualia, phenomenal consciousness, the sort Goff is clearly after. Now if we’ve got two things with this slippery acausal quality, might it not be a handy simplification if they were the same thing? This is very much the kind of simplification that Goff wants to suggest. We know or assume that everything has its own intrinsic nature. In one case, ourselves, we know what that intrinsic nature is like; it’s conscious experience. So is it not the simplest way if we suppose that consciousness is the intrinsic nature of everything? Voilà.

There’s no denying that that does make some sense. We do indeed get simplicity of a sort – but only at a price. Once we’ve taken on the huge commitment of intrinsic natures, and once we’ve also taken on the commitment of ineffable interior qualia, then it looks like good sense to combine the two commitments into, as it were, one easy payment. But it’s far better to avoid these onerous commitments in the first place.

Let me suggest that for one thing, believing in intrinsic natures poisons the essential concept of identity. Leibniz tells us that the identity of a thing resides in its properties; if all the properties of A are the same as all the properties of B, then A is B. But if everything has an unobservable inner nature as well as its observable properties, its identity is forever unknowable and there can never be certainty that this dagger I see before me is actually the same as the identical-looking one I saw in the same place a moment ago. Its inward nature might have changed.

Moreover, even if we take on both intrinsic natures and ineffable qualia, there are several good reasons to think the two must be different. If we are to put aside my fear that my dagger may have furtively changed its intrinsic nature, it must surely be that the intrinsic nature of a thing generally stays the same; but consciousness changes constantly. In fact, consciousness goes away regularly every night; does our intrinsic nature disappear too? Do sleeping people somehow not have an intrinsic nature – or if they have one, doesn’t it persist when they wake, alongside and evidently distinct from their consciousness? Or consider what consciousness is like: consciousness is consciousness of things; qualia are qualia of red, or middle C, or the smell of bacon; how can entities with no sensory organs have them? Is there a quale of nothing? There might be answers, but I don’t think they’re going to be easy ones.

There’s another problem lurking in wait, too. Goff, I think, assumes that we all exist and have intrinsic natures, but he cannot have any good reason to think so, because intrinsic natures leave no evidence. We who believe that the identity of things is founded in their observable properties have empirical grounds to believe that there are many conscious entities out there. For him the observable physics must be strictly irrelevant. He has immediate knowledge of only one intrinsic nature, his own, which he takes to be his consciousness; the most parsimonious conclusion to draw from there is not that the universe is full of intrinsic natures and consciousnesses of a similar kind, but that there is precisely one: Goff, the single consciousness that underpins everything. He seems to me, in other words, to have no defence against some kind of solipsism; simplicity makes it most likely that he lives in his own dream, or at best in a world populated by some kind of zombies.

Crazy? Well, it’s a little strange…


Watch the full video with related content here.

The discussion following the presentations I featured last week.