Consciousness – it’s all been a terrible mistake. In a really cracking issue of the JCS (possibly the best I’ve read) Keith Frankish sets out and defends the thesis of illusionism, with a splendid array of responses from supporters and others.

How can consciousness be an illusion? Surely an illusion is itself a conscious state – a deceptive one – so that the reality of consciousness is a precondition of anything being an illusion? Illusionism, of course, is not talking about the practical, content-bearing kind of consciousness, but about phenomenal consciousness, qualia, the subjective side, what it is like to see something. Illusionism denies that our experiences have the phenomenal aspect they seem to have; it is in essence a sceptical case about phenomenal experience. It aims to replace the question of what phenomenal experience is, with the question of why people have the illusion of phenomenal experience.

In one way I wonder whether it isn’t better to stick with raw scepticism than frame the whole thing in terms of an illusion. There is a danger that the illusion itself becomes a new topic and inadvertently builds the confusion further. One reason the whole issue is so difficult is that it’s hard to see one’s way through the dense thicket of clarifications thrown up by philosophers, all demanding to be addressed and straightened out. There’s something to be said for the bracing elegance of the two-word formulation of scepticism offered by Dennett (who provides a robustly supportive response here, treating illusionism as the default case) – ‘What qualia?’. Perhaps we should just listen to the ‘tales of the qualophiles’ – there is something it is like, Mary knows something new, I could have a zombie twin – and just say a plain ‘no’ to all of them. If we do that, the champions of phenomenal experience have nothing to offer; all they can do is, as Pete Mandik puts it here, gesture towards phenomenal properties. (My imagination whimpers in fear at being asked to construe the space in which one might gesture towards phenomenal qualities, let alone the ineffable limb with which the movement might be performed; it insists that we fall back on Mandik’s other description: that phenomenalists can only invite an act of inner ostension.)

Eric Schwitzgebel relies on something like this gesturing in his espousal of definition by example as a means of getting the innocent conception of phenomenal experience he wants without embracing the dubious aspects. Mandik amusingly and cogently assails the scepticism of the illusionist case from an even more radical scepticism – meta-illusionism. Sceptics argue that phenomenality can’t be specified meaningfully (we just circle around a small group of phrases and words that provide a set of synonyms with no definition outside the loop), but if that’s true, how do we even start talking about it? Whereof we cannot speak…

Introspection is certainly the name of the game, and Susan Blackmore has a nifty argument here: perhaps it’s the very act of introspecting that creates the phenomenal qualities? Her delusionism tells us we are wrong to think that there is a continuous stream of conscious experience going on in the absence of introspection, but stops short of outright scepticism about the phenomenal. I’m not sure. William James told us that introspection must be retrospection – we can only mentally examine the thought we just had, not the one we are having now – and it seems odd to me to think that a remembered state could be given a phenomenal aspect after the fact. Easier, surely, to consider that the whole business is consistently illusory?

Philip Goff is perhaps the toughest critic of illusionism; if we weren’t in the grip of scientism, he says, we should have no difficulty in seeing that the causal role of brain activity also has a categorical nature which is the inward, phenomenal aspect. If this view is incoherent or untenable in any way, we’re owed a decent argument as to why.

Myself, I think Frankish is broadly on the right track. He sets out three ways we might approach phenomenal experience. The first is to accept its reality and look for an explanation that significantly modifies our understanding of the world. The second is to look for an explanation that reconciles it with our current understanding, finding explanations within the world of physics of which we already have a general grasp. The third is to dismiss it as an illusion. I think we could add an ‘approach zero’: accept the reality of phenomenal experience and simply regard it as inexplicable. This sounds like mysterianism – but mysterians think the world itself makes sense; we just don’t have the brains to see it. Option zero says there is actual irreducible mystery in the real world. This conclusion is surely thoroughly repugnant to most philosophers, who aspire to clear answers even if they don’t achieve them; but I think it is hard to avoid unless we take the sceptical route. Phenomenal experience is, on most mainstream accounts, something over and above the physical account just by definition. A physical explanation is automatically ruled out; even if good candidates are put forward, we can always retreat and say that they explain some aspects of experience, but not the ineffable one we are after. I submit that this same strategy of retreat means there cannot be any satisfactory rational account of phenomenal experience, because it can always be asserted that something ineffable is missing.

I say philosophers will find this repugnant, but I can sense some amiable theologians sidling up to me. Those light-weight would-be scientists can’t deal with mystery and the ineffable, they say, but hey, come with us for a bit…

Regular readers may possibly remember that I think the phenomenal aspect of experience is actually just its reality; the particularity or haecceity of real experience is puzzling to those who think that theory must accommodate everything. That reality is itself mysterious in some sense, though: not easily accounted for and not susceptible to satisfactory explanation by either induction or deduction. It may be that to understand it in full we have to give up on those more advanced mental tools and fall back on the basic faculty of recognition – in my view the basis of all our thinking, and the capacity of which both deduction and induction are specialised forms. That implies that we might have to stop applying logic and science and just contemplate reality; I suppose that might mean in turn that meditation and the mystic traditions of some religions are not exactly a rejection of philosophy as understood in the West, but a legitimate extension of the same enquiry.

Yeah, but no; I may be irredeemably Western and wedded to scientism, but rightly or wrongly, meditation doesn’t scratch my epistemic itch. Illusionism may not offer quite the right answer, but for me it is definitely asking the right questions.

Susan Blackmore, champion of memes, interviews Patricia Churchland, author of the classic Neurophilosophy. With Swedish subtitles.

(Sorry, I’m not clever enough to embed this one.)

Philip Goff tells us that panpsychism is an appealingly simple view. I do think he has captured an important point, and one which makes a real contribution to panpsychism’s otherwise puzzling ability to attract adherents. But although the argument is clear and well-constructed I could hardly agree less.

Even his opening sentence has me shaking my head…

Common sense tells us that only living things have consciousness.

Hm; I’m not altogether sure such questions are really even within the scope of common sense, but popular culture seems to tell us that people are generally happy to assume that robots may be conscious. In fact, I suspect that only our scientific education stops us attributing agency to the weather, stones that trip us up, and almost anything that moves. It isn’t only Basil Fawlty that shouts at his car!

Goff suggests that the main argument against panpsychism (approximately the view that everything everywhere is conscious: I skip here various caveats and clarifications which don’t affect the main argument) is just that it is ‘crazy’ – that it conflicts with common sense. He goes on to rebut this by pointing out that relativity and Darwinism both conflict with common sense too. This seems dangerously close to the classic George Spiggott argument so memorably refuted in the 1967 film Bedazzled:

Stanley Moon: You’re a nutcase! You’re a bleedin’ nutcase!
George Spiggott: They said the same of Jesus Christ, Freud, and Galileo.
Stanley Moon: They said it of a lot of nutcases too.
George Spiggott: You’re not as stupid as you look, are you, Mr. Moon?

But really we’re fighting a straw man; the main argument against panpsychism is surely not a mere appeal to common sense. (Who are these philosophers who stick to common sense and how do they get any work done?) One of the candidates for the main counter-argument must surely be the difficulty of saying exactly which of the teeming multi-layered dynasties of entities in the universe we deem to be conscious, whether composite entities qualify, and if so, how on Earth that works. Another main line of hostile argument must be the problem of explaining how these ubiquitous consciousnesses relate to the ordinary kind that appears to operate in brains. Perhaps the biggest objection of all is to panpsychism’s staggering ontological profligacy. William of Occam told us to use as few angels as possible; panpsychism stations one in every particle of the cosmos.

How could such a massive commitment represent simplicity? The thing is, Goff isn’t starting from nothing; he already has another metaphysical commitment. He believes that things have an intrinsic nature apart from their physical properties. Science, on this view, is all about a world that often, rather mysteriously, gets called the ‘external’ world. It tells us about the objectively measurable properties of things, but nothing at all about the things in themselves. No doubt Goff has reasons for thinking this that he has set out elsewhere, probably in the book from which he helpfully provides an interesting chapter.

But whatever his grounds may be, I think this view is itself hopeless. For one thing, if these intrinsic natures have no physical impact, nothing we ever say or write can have been caused by them. That seems worrying. Ah, but here I’m inadvertently beginning to make Goff’s case for him, because what else is there that never causes any of the things we say about it? Qualia, phenomenal consciousness, the sort Goff is clearly after. Now if we’ve got two things with this slippery acausal quality, might it not be a handy simplification if they were the same thing? This is very much the kind of simplification that Goff wants to suggest. We know or assume that everything has its own intrinsic nature. In one case, ourselves, we know what that intrinsic nature is like; it’s conscious experience. So is it not the simplest way if we suppose that consciousness is the intrinsic nature of everything? Voilà.

There’s no denying that that does make some sense. We do indeed get simplicity of a sort – but only at a price. Once we’ve taken on the huge commitment of intrinsic natures, and once we’ve also taken on the commitment of ineffable interior qualia, then it looks like good sense to combine the two commitments into, as it were, one easy payment. But it’s far better to avoid these onerous commitments in the first place.

Let me suggest that for one thing, believing in intrinsic natures poisons the essential concept of identity. Leibniz tells us that the identity of a thing resides in its properties; if all the properties of A are the same as all the properties of B, then A is B. But if everything has an unobservable inner nature as well as its observable properties, its identity is forever unknowable and there can never be certainty that this dagger I see before me is actually the same as the identical-looking one I saw in the same place a moment ago. Its inward nature might have changed.

Moreover, even if we take on both intrinsic natures and ineffable qualia, there are several good reasons to think the two must be different. If we are to put aside my fear that my dagger may have furtively changed its intrinsic nature, it must surely be that the intrinsic nature of a thing generally stays the same – but consciousness constantly changes? In fact, consciousness goes away regularly every night; does our intrinsic nature disappear too? Do sleeping people somehow not have an intrinsic nature – or if they have one, doesn’t it persist when they wake, alongside and evidently distinct from their consciousness? Or consider what consciousness is like: consciousness is consciousness of things; qualia are qualia of red, or middle C, or the smell of bacon; how can entities with no sensory organs have them? Is there a quale of nothing? There might be answers, but I don’t think they’re going to be easy ones.

There’s another problem lurking in wait, too, I think. Goff assumes that we all exist and have intrinsic natures, but he cannot have any good reason to think so, because intrinsic natures leave no evidence. We who believe that the identity of things is founded in their observable properties have empirical grounds to believe that there are many conscious entities out there. For him the observable physics must be strictly irrelevant. He has immediate knowledge of only one intrinsic nature, his own, which he takes to be his consciousness; the most parsimonious conclusion to draw from there is not that the universe is full of intrinsic natures and consciousnesses of a similar kind, but that there is precisely one: Goff, the single consciousness that underpins everything. He seems to me, in other words, to have no defence against some kind of solipsism; simplicity makes it most likely that he lives in his own dream, or at best in a world populated by some kind of zombies.

Crazy? Well, it’s a little strange…



The discussion following the presentations I featured last week.

Is awareness an on/off thing? Or is it on a sort of dimmer switch? A degree of variation seems to be indicated by the peculiar vagueness of some mental content (or is that just me?). Fazekas and Overgaard argue that even a dimmer switch is far too simple; they suggest that there are at least four ways in which awareness can be graduated.

First, interestingly, we can be aware of things to different degrees on different levels. Our visual system identifies dark and light, then at a higher level identifies edges, at a higher level again sees three dimensional shapes, and higher again, particular objects. We don’t have to be equally clear at all levels, though. If we’re looking at the dog, for example, we may be aware that in the background is the cat, and a chair; but we are not distinctly aware of the cat’s whiskers or the pattern on the back of the chair. We have only a high-level awareness of cat and chair. It can work the other way, too; people who suffer from face-blindness, for example, may be well able to describe the nose and other features of someone presented to them without recognising that the face belongs to a friend.

That is certainly not the only way our awareness can vary, though; it can also be of higher or lower quality. That gives us a nice two-dimensional array of possible vagueness; job done? Well no, because Fazekas and Overgaard think quality varies in at least three ways.

  • Intensity
  • Precision
  • Temporal Stability

So in fact we have a matrix of vagueness which has four dimensions, or perhaps I should say three plus one.
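To make that ‘three plus one’ structure concrete, here is a minimal sketch in Python. Everything in it – the level names, the numbers, the 0-to-1 scale – is my own invention for illustration; Fazekas and Overgaard offer no formalism of this kind.

```python
from dataclasses import dataclass

@dataclass
class Quality:
    """Quality of awareness at one level, each factor graded 0.0 to 1.0."""
    intensity: float
    precision: float
    temporal_stability: float

# One awareness state assigns a quality triple to each level of processing.
# Looking directly at the dog: reasonably clear at every level.
looking_at_the_dog = {
    "light/dark": Quality(intensity=0.9, precision=0.8, temporal_stability=0.9),
    "edges":      Quality(intensity=0.9, precision=0.7, temporal_stability=0.9),
    "3D shapes":  Quality(intensity=0.8, precision=0.7, temporal_stability=0.8),
    "objects":    Quality(intensity=0.9, precision=0.9, temporal_stability=0.9),
}

# The cat in the background: only the high 'objects' level is clear enough
# ("that's the cat"); the whisker-level detail is degraded.
background_cat = {
    "light/dark": Quality(0.4, 0.2, 0.5),
    "edges":      Quality(0.3, 0.2, 0.4),
    "3D shapes":  Quality(0.3, 0.2, 0.4),
    "objects":    Quality(0.7, 0.5, 0.7),
}
```

The face-blindness case would be the mirror image: decent quality at the feature levels, near-zero at the level of recognising whose face it is.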

The authors are scrupulous about explaining how intensity, precision, and temporal stability probably relate to neuronal activity, and they are quite convincing; if anything I should have liked a bit more discussion of the phenomenal aspects – what are some of these states of partially-degraded awareness actually like?

What they do discuss is what mechanism governs or produces their three quality factors. Intensity, they think, comes from the allocation of attention. When we really focus on something, more neuronal resources are pulled in and the result is in effect to turn up the volume of the experience a bit.

Precision is also connected with attention; paying attention to a feature produces sharpening and our awareness becomes less generic (so instead of merely seeing something is red, we can tell whether it is magenta or crimson, and so on). This is fair enough, but it does raise a small worry as to whether intensity and precision are really all that distinct. Mightn’t it just be that enhancing the intensity of a particular aspect of our awareness makes it more distinct and so increases precision? The authors acknowledge some linkage.

Temporal stability is another matter. Remember we’re not talking here about whether the stimulus itself is brief or sustained but whether our awareness is constant. This kind of stability, the authors say, is a feature of conscious experience rather than unconscious responses and depends on recurrence and feedback loops.

Is there a distinct mechanism underlying our different levels of awareness? The authors think not; they reckon that it is simply a matter of what quality of awareness we have on each level. (I suppose we have to allow for the possibility, if not the certainty, that some levels will be not just poor quality but actually absent; I don’t think I’m typically aware of all possible levels of interpretation when considering something.)

So there is the model in all its glory; but beware: there are folks around who argue that awareness is actually not like this, but in at least some cases is an on/off matter. Some experiments by Asplund were based on the fact that if a subject is presented with two stimuli in quick succession, the second is missed. As the gap increases, we reach a point where the second stimulus can be reported; but subjects don’t see it gradually emerging as the interval grows; rather, with one gap it’s not there, while with a slightly larger one, it is.

Fazekas and Overgaard argue that Asplund failed to appreciate the full complexity of the graduation that goes on; his case focuses too narrowly on precision alone. In that respect there may be a sharp change, but in terms of intensity or temporal stability, and hence in terms of awareness overall, they think there would be graduation.

A second rival theory, which the authors call the ‘levels of processing view’ or LPV, suggests that while awareness of low-level items is graduated, at a high level you’re either aware or not. The experiments behind this view used colours to represent low-level features and numbers for high-level ones, and found that while there was graduated awareness of the former, with the latter you either got the number or you didn’t.

Fazekas and Overgaard argue that this is because colours and numbers are not really suitable for this kind of comparison. Red and blue are on a continuous scale and one can morph gradually into the other; the numeral ‘7’ does not gradually change into ‘9’. This line of argument made sense to me, but on reflection caused me some problems. If it is true that numerals are just distinct in this way, that seems to concede a point that makes the LPV case intuitively appealing: in some circumstances things just are on/off. It seemed true at first sight that numerals are distinct in this way, but when I thought back to experiences in the optician’s chair, I could remember cases where the letter I was trying to read seemed genuinely indeterminate between two or even more alternatives. If, though, my first thought was wrong and numerals are not in fact inherently distinct in this way, that seems to undercut Fazekas and Overgaard’s counter-argument.

On the whole the more complex model seems to provide better explanatory resources and I find it broadly convincing, but I wonder whether a reasonable compromise couldn’t be devised, allowing that for certain experiences there may be a relatively sharp change of awareness, with only marginal graduation? Probably I have come up with a vague and poorly graduated conclusion…



What is the problem about consciousness? A Royal Institution video with interesting presentations (part 2 another time).

Anil Seth presents a striking illusion and gives an optimistic view of the ability of science to tackle the problem; or maybe we just get on with the science anyway? The philosophers may ask good questions, but their answers have always been wrong.

Barry Smith says that’s because when the philosophers have sorted a subject out it moves over into science. One problem is that we tend to miss consciousness itself and think instead about its contents. Isn’t there a problem: to be aware of your own awareness changes it? I feel pain in my body, but could consciousness be in my ankle?

Chris Frith points out that actually only a small part of our mental activity has anything to do with consciousness, and in fact there is evidence to show that many of the things we think are controlled by conscious thought really are not: a vindication of Helmholtz’s idea of unconscious inference. Thinking about your thinking messes things up? Try asking someone how happy they feel – guaranteed to make them less happy immediately…

Is God a neuronal delusion? Dr Michael Persinger’s God Helmet might suggest so. This nice Atlas Obscura piece by Linda Rodriguez McRobbie finds a range of views.

The helmet is far from new, partly inspired by Persinger’s observations back in the 1980s of a woman whose seizure induced a strong sense of the presence of God. In the 90s, Persinger decided to see whether he could stimulate creativity by applying very mild magnetic fields to the right hemisphere of the brain. Among other efforts he tried reproducing the kind of pattern of activity he had seen in the earlier seizure and, remarkably, succeeded in inducing a sense of the presence of God in some of his subjects. Over the years he has continued to repeat this exercise and others like it; the helmet doesn’t always induce a sense of God; sometimes people fall asleep, sometimes (like Richard Dawkins) they get very little out of the experience. Sometimes they have a vivid sense of some presence, but don’t identify it as God.

Could this, though, be the origin of theism? Do all religious experiences stem from aberrant neuronal activity? It seems pretty unlikely to me. Quite apart from the severely reductive nature of the hypothesis, it doesn’t seem to offer a broad enough account. People arrive at a belief in God by various routes, and many of them do not rely on any sense of immediate presence. For people brought up in religious families, and for pretty much everyone in, say, medieval Europe, God is or was just a fact, a natural part of the world. Some people come to belief along a purely rational path, or by one that seeks out meaning for life and the world. Such people may not require any sense of a personal presence of the divine; or they may earnestly desire it without attaining it – without thereby losing their belief. Even those whose belief stems from a sense of mystical contact with God do not necessarily, or I think even typically, experience it as a personal presence somewhere in the room; they might feel instead that they have ascended to a higher sphere, or experience a kind of communion which has nothing to do with location.

Equally, of course, that presence in the room may not seem like God. It might be more like the thing under the bed, or like the malign person who just might be in the shadows behind you on the dark lonely path. People who wake from sleep without immediately regaining motor control of their body may have the sense of someone sitting or pressing on their chest: a ‘hag’ or other horrid being, not God. Perhaps Persinger is lucky that his experiments do not routinely induce horror. Persinger believes that the way it works is that by stimulating one hemisphere of the brain, you make the other hemisphere aware of it, and this other presence is translated into an external person of some misty kind. I doubt it is quite like that. I do suspect that a sense of ‘someone there’ may be one of the easiest feelings to induce, because the brain is predisposed towards it, just as we are predisposed towards seeing faces in random graphic noise. The evolutionary advantages of a mental system which is always ready to shout ‘look behind you!’ are surely fairly easy to see in a world where our ancestors always needed to consider the possibility that an enemy or a predator might indeed be lurking nearby. The attention wasted on a hundred false alarms is easily outbalanced by the life-saving potential of one justified one.

So what is going on? Perhaps not that much after all. Reproducing Persinger’s results has proved difficult in most cases and it seems plausible that the effect actually depends as much on suggestion as anything else. Put people into a special environment, put a mild magnetic buzz into their scalp, and some of them will report interesting experiences, especially if they have been primed in advance to expect something of the kind. It is perfectly reasonable to think that electrical interference might affect the brain, but Persinger’s magnets really are pretty mild and it seems rather unlikely that the fields in question are really strong enough to affect the firing of neurons through the skull and into the rather resistant watery mass of the brain. In addition I would have to say that the whole enterprise has a curiously dated air about it: the faith in a rather simple idea of hemispheric specialisation, the optimistic conviction that controlling the brain is going to turn out a pretty simple business, and perhaps even the love of a good trippy experience, all seem strongly rooted in late twentieth century thinking.

Perhaps in the end the God Helmet is really another sign of an issue which has become more and more evident lately: the strong suggestibility of the human mind means that sometimes, even in neurological science, we are in danger of getting the interesting results we really wanted all along, however misleading or ill-founded they may really be.

A nice podcast about the late great Hilary Putnam. Meaning ain’t in the head!

Joe Kern has conducted a careful investigation of identity and personhood, paring away the contingent properties and coming up with a form of materialist reincarnation which is a more serious cousin of my daughter’s metempsychotic solipsism.

A less attractive conclusion (I may be biased) but with thanks to Jesus Olmo, here’s an examination of whether the Internet is a superorganism of a kind that may attain consciousness. I remember when they used to say the brain was like a telephone exchange (my daughter would probably think that was a kind of second-hand market for mobes).

Is there an intermediate ethical domain, suitable for machines?

The thought is prompted by this summary of an interesting seminar on Engineering Moral Agents, one of the ongoing series hosted at Schloss Dagstuhl. It seems to have been an exceptionally good session which got into some of the issues in a really useful way – practically oriented but not philosophically naive. It noted the growing need to make autonomous robots – self-driving cars, drones, and so on – able to deal with ethical issues. On the one hand it looked at how ethical theories could be formalised in a way that would lend itself to machine implementation, and on the other how such a formalisation could in fact be implemented. It identified two broad approaches: top-down, where in essence you hard-wire suitable rules into the machine, and bottom-up, where the machine learns for itself from suitable examples. The approaches are not necessarily exclusive, of course.

The seminar thought that utilitarian and Kantian theories of morality were both prima facie candidates for formalisation. Utilitarian, or more broadly consequentialist, theories look particularly promising because calculating the optimal value (such as the greatest happiness of the greatest number) achievable from the range of alternatives on offer looks like something that can be reduced to arithmetic fairly straightforwardly. There are problems: consequentialist theories usually yield at least some results that look questionable in common-sense terms, and finding the initial values to slot into your sums is a non-trivial challenge – how do you put a clear numerical value on people’s probable future happiness?
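The arithmetic really is straightforward, which is the point. Here is a toy sketch (mine, not the seminar’s – every action, probability, and happiness value is invented) of a consequentialist chooser; the entire difficulty hides in where the numbers come from.

```python
# A minimal consequentialist chooser: pick the action whose possible
# outcomes have the greatest expected total happiness. The hard, unsolved
# part is supplying defensible numbers, not doing the sums.

actions = {
    # action: list of (probability, total happiness of that outcome)
    "swerve":   [(0.9, 100.0), (0.1, -500.0)],
    "brake":    [(0.7,  80.0), (0.3, -200.0)],
    "continue": [(1.0, -300.0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # 'swerve', on these made-up figures
```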

A learning system eases several of these problems. You don’t need a fully formalised system (so long as you can agree on a database of examples). But you face the same problems that arise for learning systems in other contexts: you can’t have the assurance of knowing why the machine behaves as it does, and if your database has unnoticed gaps or biases you may suffer sudden catastrophic mistakes. The seminar summary rightly notes that a machine that learned its ethics will not be able to explain its behaviour; but I don’t know that that means it lacks agency; many humans would struggle to explain their moral decisions in a way that would pass muster philosophically. Most of us could do no more than point to harms avoided or social rules observed, at best.
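To see how the gaps bite, here is a deliberately crude bottom-up sketch of my own (nothing like this appears in the seminar summary): moral judgment as nearest-neighbour lookup in a database of labelled cases.

```python
# 'Learned' ethics in miniature: judge a new case by the most similar
# example in the database. Features and verdicts are invented. Note that
# the system gives a confident verdict even for cases quite unlike
# anything it has seen, and can offer no reason beyond 'it resembled
# case such-and-such'.

examples = [
    # (harm_risk, benefit, people_affected) -> permitted?
    ((0.1, 0.9, 1), True),
    ((0.8, 0.2, 3), False),
    ((0.3, 0.7, 2), True),
]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def judge(case):
    _, verdict = min(examples, key=lambda ex: distance(ex[0], case))
    return verdict

print(judge((0.2, 0.8, 1)))   # True: close to the first example
print(judge((0.9, 0.9, 50)))  # a verdict all the same, gaps notwithstanding
```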

The seminar looked at some interesting approaches, mentioned here with tantalising brevity: Horty’s default logic, Sergot’s STIT (See To It That) logic, and the possibility of drawing on the decision theory already developed in the context of micro-economics. The last is consequentialist in character, and there was an examination of whether in fact all ethical theories can be restated in consequentialist terms (yes, apparently, but only if you’re prepared to stretch the idea of a consequence to a point where it becomes vacuous). ‘Reason-based’ formalisations presented by List and Dietrich interestingly get away from narrow consequentialism and its problems by using a rightness function which can accommodate various factors.
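As I understand the appeal of a rightness function – and what follows is a caricature of my own, not List and Dietrich’s actual formalism – it is that factors of quite different kinds, consequentialist and duty-like, can sit inside one scoring scheme:

```python
# Toy 'rightness function': scores an option on several morally relevant
# features at once. All features and weights are invented for illustration.

def rightness(option):
    score = 0.0
    score += 1.0 * option.get("welfare_produced", 0)  # consequentialist factor
    score -= 5.0 * option.get("promises_broken", 0)   # deontological factor
    score -= 3.0 * option.get("people_deceived", 0)   # another duty-like factor
    return score

options = [
    {"name": "A", "welfare_produced": 10, "promises_broken": 1},
    {"name": "B", "welfare_produced": 6},
]
print(max(options, key=rightness)["name"])  # 'B': the broken promise sinks A
```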

The seminar noted that society will demand high, perhaps precautionary standards of safety from machines, and floated the idea of an ethical ‘black box’ recorder. It noted the problem of cultural neutrality and the risk of malicious hacking. It made the important point that human beings do not enjoy complete ethical agreement anyway, but argue vigorously about real issues.

The thing that struck me was how far it was possible to go in discussing morality when it is pretty clear that the self-driving cars and so on under discussion actually have no moral agency whatever. Some words of caution are in order here. Some people think moral agency is a delusion anyway; some maintain that on the contrary, relatively simple machines can have it. But I think for the sake of argument we can assume that humans are moral beings, and that none of the machines we’re currently discussing is even a candidate for moral agency – though future machines with human-style general understanding may be.

The thing is that successful robots currently deal with limited domains. A self-driving car can cope with an array of entities like road, speed, obstacle, and so on; it does not and could not have the unfettered real-world understanding of all the concepts it would need to make general ethical decisions about, for example, what risks and sacrifices might be right when it comes to actual human lives. Even Asimov’s apparently simple Laws of Robotics required robots to understand and recognise correctly and appropriately the difficult concept of ‘harm’ to a human being.

One way of squaring this circle might be to say that, yes, actually, any robot which is expected to operate with any degree of autonomy must be given a human-level understanding of the world. As I’ve noted before, this might actually be one of the stronger arguments for developing human-style artificial general intelligence in the first place.

But it seems wasteful to bestow consciousness on a roomba, both in terms of pure expense and in terms of the chronic boredom the poor thing would endure (is it theoretically possible to have consciousness without the capacity for boredom?). So really the problem that faces us is one of making simple robots, which operate on restricted domains, able to deal adequately with occasional issues from the unrestricted domain of reality. Now clearly ‘adequately’ is an important word there. I believe that in order to make robots that operate acceptably in domains they cannot understand, we’re going to need systems that are conservative and tend towards inaction. We would not, I think, accept a long trail of offensive and dangerous behaviour in exchange for a rare life-saving intervention. This suggests rules rather than learning: a set of rules that allow a moron to behave acceptably without understanding what is going on.
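In caricature, the kind of thing I have in mind might look like this – all the entity names and actions are hypothetical:

```python
# A conservative 'sub-ethical' rule: act normally inside the restricted
# domain; on encountering anything the system has no concepts for,
# default to the safest available inaction.

KNOWN_ENTITIES = {"road", "lane", "vehicle", "obstacle", "pedestrian"}

def act(perceived_entities, proposed_action):
    unknown = perceived_entities - KNOWN_ENTITIES
    if unknown:
        # Outside the domain: don't improvise, stand down.
        return "slow_and_stop"
    return proposed_action

print(act({"road", "vehicle"}, "overtake"))              # 'overtake'
print(act({"road", "crowd_of_protesters"}, "overtake"))  # 'slow_and_stop'
```

Crude, certainly; but a moron that stands down gracefully may be preferable to one that guesses.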

Do these rules constitute a separate ethical realm, a ‘sub-ethics’ that substitute for morality when dealing with entities that have autonomy but no agency? I rather think they might.

Here’s another IAI video on Explaining the Inexplicable: it honestly doesn’t have much to do with consciousness but today, for reasons I can’t quite put my finger on, it felt appropriate…

Nancy Cartwright says there are those who like the ‘Big Explainers’, theories that offer to explain everything; then there are those who cherish mystery. She situates herself in the middle somewhere – the ‘Missouri’ position: we know that some things can be explained, so let’s see what you’ve got.

Piers Corbyn thinks the incomplete Enlightenment project has been undone by a fondness for grand theories and models (not least over climate change). We need to get back on track, and making scientific falsehood illegal would help.

James Ladyman thinks modern physics has removed the certainty that everything can be explained. Nevertheless, science has succeeded through its refusal to accept that any domain is in principle inexplicable. We should carry on and instead of trying for grand total explanations we should learn to live with partial success.

I don’t know much about this, but I reckon an explanation is an account that, when understood, stops a certain kind of worry. We may notice that most explanations reduce or simplify the buzzing complexity of the world; once we have the explanation we need only worry about a few general laws instead of millions of particles; or perhaps we know we need only worry about a simpler domain to which the original can be reduced. In short, the desire for explanation is akin to the desire for tidiness, or let’s politely call it elegance.