Mary and the Secret Stones

Adam Pautz has a new argument to show that consciousness is irreducible (that is, it can’t be analysed down into other terms like physics or functions). It’s a relatively technical paper – a book-length treatment is forthcoming, it seems – but at its core is a novel variant on the good old story of Mary the Colour Scientist. Pautz provides several examples in support of his thesis, and I won’t address them all, but a look at this new Mary seems interesting.

Pautz begins by setting out a generalised version of how plausible reductive accounts must go. His route goes over some worrying territory – he is quite Russellian, and he seems to take for granted the old and questionable distinction between primary and secondary qualities. However, if the journey goes through some uncomfortable places, the destination seems to be a reasonable place to be. This is a moderately externalist kind of reduction which takes consciousness of things to involve a tracking relation to qualities of real things out there. We need not worry about what kinds of qualities they are for current purposes, and primary and secondary qualities must be treated in a similar way. Pautz thinks that if he can show that reductions like this are problematic, that amounts to a pretty good case for irreducibility.

So in Pautz’s version, Mary lives on a planet where the outer surfaces of everything are black, grey, or white. However, on the inside they are brilliantly coloured, with red, reddish orange, and green respectively. All the things that are black outside are red inside, and so on, and this is guaranteed by a miraculous ‘chemical’ process such that changes to the exterior colour are instantly reflected in appropriate changes inside. Mary only sees the outsides of things, so she has never seen any colours but black, white and grey.

Now Mary’s experience of black is a tracking relation to black reflectances, but in this world it also tracks red interiors. So does she experience both colours? If not, then which? A sensible reductionist will surely say that she only experiences the external colour, and they will probably be inclined to refine their definitions a little so that the tracking relation requires an immediate causal connection, not one mediated through the oddly fixed connection of interior and exterior colours. But that by no means solves the problem, according to Pautz. Mary’s relation to red is only very slightly different to her relation to black. Similar relations ought to show some similarity, but in this case Mary’s relation to black is a colour experience, whereas her relation to red, intrinsically similar, is not a colour experience – or an experience of any kind! If we imagine Martha in another world experiencing a stone with a red exterior, then Martha’s relation to red and Mary’s are virtually identical, yet the reductionist must say they have no similarity whatever. Suppose you had a headache this morning, suggests Pautz: could you then say that you were in a nearly identical state this afternoon, but that it was not the state of experiencing a headache – in fact no experience at all (not even, presumably, the experience of not having a headache)?

Pautz thinks that examples of this kind show that reductive accounts of consciousness cannot really work, and we must therefore settle for non-reductive ones. But he is left offering no real explanation of the relation of being conscious of something; we really have to take that as primitive, something just given as fundamental. Here I can’t help but sympathise with the reductionists; at least they’re trying! Yes, no doubt there are places where explanation has to stop, but here?

What about Mary? The thing that troubles me most is that remarkable chemical connection that guarantees the internal redness of things that are externally black. Now if this were a fundamental law of nature, or even some logical principle, I think we might be willing to say that Mary does experience red – she just doesn’t know yet (perhaps can never know?) that that’s what black looks like on the inside. If the connection is a matter of chance, or even guaranteed by this strange local chemistry, I’m not sure the similarity of the tracking relations is as great as Pautz wants it to be. What if someone holds up for me a series of cards with English words on one side? On the other, they invariably write the Spanish equivalent. My tracking relation to the two words is very similar, isn’t it, in much the same way as above? So is it plausible to say I know what the English word is, but that my relation to the Spanish word is not that of knowing it – that in fact that relation involves no knowledge of any kind? I have to say I think that is perfectly plausible.

I can’t claim these partial objections refute all of Pautz’s examples, but I’m keeping the possibility of reductive explanations open for now.

 

Third Wave AI?

DARPA is launching a significant new AI initiative; it could be a bad mistake.

DARPA (the Defense Advanced Research Projects Agency) has an awesome record of success in promoting the development of computer technology; without its interventions we probably wouldn’t be talking seriously about self-driving cars, and we might not have any internet. So any big DARPA project is going to be at least interesting and quite probably groundbreaking. This one seeks to bring in a Third Wave of AI. The first wave, on this showing, was a matter of humans knowing what needed to be done and just putting that knowledge into coded rules (this actually smooshes together a messy history of some very different approaches). The second wave involves statistical techniques and machines learning for themselves; recently we’ve seen big advances from this kind of approach. While there’s still more to be got out of these earlier waves, DARPA foresees a third one in which context-based programs are able to explain and justify their own reasoning. The overall idea is well explained by John Launchbury in this video.

In many ways this is timely, as one of the big fears attached to recent machine learning projects has arisen from the fact that there is often no way for human beings to understand, in any meaningful sense, how they work. If you don’t know how a ‘second wave’ system is getting its results, you cannot be sure it won’t suddenly go wrong in bizarre ways (and in fact such systems do). There have even been moves to make it a legal requirement that a system be explicable.

I think there are two big problems, though. The demand for an explanation implicitly requires one that human beings can understand. This might easily hobble computer systems unnecessarily, denying us immensely useful new technologies that just happen to be slightly beyond our grasp. One of the limitations of human cognition, for example, is that we can only hold so many things in mind at once. Typically we get round this by structuring and dividing problems so we can deal with simple pieces one at a time; but it’s likely there are cognitive strategies that this rules out. Already I believe there are strategies in chess, devised by computers, that clearly work but whose conditional structure is so complex no human can understand them intuitively. So it could be that the third wave actually restores some of the limitations of the first, by tying progress to things humans already get.

The second problem is that we still have no real idea how much of human cognition works. Recent advances in visual recognition have brought AI to levels that seem to match or exceed human proficiency, but the way they break down suddenly in weird cases is so unlike human thought that it shows how different the underlying mechanisms must still be. If we don’t know how humans do explainable recognition, where is our third wave going to come from?

Of course, the whole framework of the three waves is a bit of a rhetorical trick. It rewrites and recategorises the vastly complex, contentious history of AI into something much simpler; it discreetly overlooks all the dead ends and winters of disillusion that actually featured quite prominently in that story. The result makes the ‘third wave’ seem a natural inevitability, so that we ask only when and by whom, not whether and how.

Still, even projects whose success is not inevitable sometimes come through…

The Mark of the Mental

An interesting review by Guillaume Fréchette of Mark Textor’s new book Brentano’s Mind. Naturally this deals among other things with the claim for which Brentano is most famous: that intentionality is the distinctive feature of the mental (and so of thoughts, consciousness, awareness, and so on). Textor apparently makes five major claims, but I only want to glance at the first one, that in fact ‘Intentionality is an implausible mark of the mental’.

What was Brentano on about, anyway? Intentionality is the property of pointing at, or meaning, or being about, something. It was discussed in medieval philosophy and then made current again by Brentano when he, like others, was trying to establish an empirical science of psychology for the first time. In his view:

“intentional in-existence, the reference to something as an object, is a distinguishing characteristic of all mental phenomena. No physical phenomenon exhibits anything similar.”

Textor apparently thinks that there’s a danger of infinite regress here. He reads Brentano’s mention of in-existence as meaning we need to think of an immanent object ‘existing in’ our mind in order to think of an object ‘out there’; but in that case, doesn’t thinking of the immanent object require a further immanent object, and so on? There seems to be more than one way of escaping this regress, however. Perhaps we don’t need to think of the immanent object; it just has to be there. Maybe awareness of an external object and introspecting an internal one are quite different processes, the latter not requiring an immanent object. Perhaps the immanent object is really a memory, or perhaps the whole business of immanent objects reads more into Brentano than is warranted.

Textor believes Brentano is pushed towards primitivism – hey, this just is the mark of the mental, full stop – and thinks it’s possible to do better. I think this is nearly right, except it assumes Brentano must be offering a theory, even if it’s only the bankrupt one of primitivism. I think Brentano observes that intentionality is the mark of the mental, and shrugs. The shrug is not a primitivist thesis, it just expresses incomprehension. To say that one does not know x is not to say that x is unknowable. I could of course be wrong, both about Brentano, and particularly about Textor.

What I think you have to do is go back and ask why Brentano thought intentionality was the mark of the mental in the first place. I think it’s a sort-of empirical observation. All thoughts are about, or of, something. If we try to imagine a thought that isn’t about anything, we run into difficulty. Is there a difference between not thinking of anything and not thinking at all (thinking of nothing may be a different matter)? Similarly, can there be awareness which isn’t awareness of anything? One can feel vast possible disputes about this opening up even as we speak, but I should say it is at least pretty plausible that all mental states feature intentionality.

Physical objects, such as stones, are not about anything; though they can be, like books, if we have used the original intentionality of our minds to bestow meaning on them; if in fact we intend them to mean something. Once again, this is disputable, but not, to me, implausible.

Intentionality remains a crucially important aspect of the mind, not least because we have got almost nowhere with understanding it. Philosophically there are of course plenty of contenders; ideas about how to build intentionality out of information, out of evolution, or indeed to show how original intentionality is a bogus idea in the first place. To me, though, it’s telling that we’ve got nowhere with replicating it. Where AI would seem to require some ability to handle meaning – in translation, for example – that requirement has to be sidestepped and a different route taken. While intentionality remains mysterious, there will always be a rather large hole in our theories of consciousness.

Intentionality and Introspection

Some people, I know, prefer to get their philosophy in written form; but if you like videos it’s well worth checking out Richard Brown’s YouTube series Consciousness This Month.

This one, Ep 4, is about mental contents, with Richard setting out briefly but clearly a couple of the major problems (look at the camera, Richard!).

Introspection, he points out, is often held to be incorrigible or infallible on certain points. You can be wrong about being at the dentist, but you can’t be wrong about being in pain. This is because of the immediacy of the experience. In the case of the dentist, we know there is a long process between light hitting your retina and the dentist being presented to consciousness. Various illusions and errors provide strong evidence for the way all sorts of complex ‘inferences’ and conclusions have been drawn by your unconscious visual processing system before the presence of the dentist gets inserted into your awareness in the guise of a fact. There is lots of scope for that processing to go wrong, so that the dentist’s presence might not be a fact at all. There’s much less processing involved in our perception of someone tugging on a tooth, but still maybe you could be dreaming or deluded. But the pain is inside your mind already; there’s no scope for interpretation and therefore no scope for error.

My own view on this is that it isn’t our sense data that have to be wrong, it’s our beliefs about our experiences. If the results of visual processing are misleading, we may end up with the false belief that there is a dentist in the room. But that’s not the only way for us to pick up false beliefs, and nothing really prevents our holding false beliefs about being in pain. There is some sense in which the pain can’t be wrong, but that’s more a matter of truth and falsity being properties of propositions, not of pains.

Richard also sketches the notion of intentionality, or ‘aboutness’, reintroduced to Western philosophy as a key idea by Brentano, who took it to be the distinguishing feature of the mental. When we think about things it seems as if our thought is directed towards an external object. In itself that seems to require some explanation, but it gets especially difficult when you consider that we can easily talk about non-existent or even absurd things. This is the kind of problem that caused Meinong to introduce a distinction between existence and subsistence, so that the objects of thought could have a manageable ontological status without being real in the same way as physical objects.

Regulars may know that my own view is that consciousness is largely a matter of recognition. Humans, we might say, are superusers of recognition. Not only can we recognise objects, we can recognise patterns and use them for a sort of extrapolation. The presence of a small entity is recognised, but also a larger entity of which it is part. So we recognise dawn, but also see that it is part of a day. From the larger entity we can recognise parts not currently present, such as sunset, and this allows us to think about entities that are distant in time or space. But the same kind of extrapolation allows us to think about things that do not, or even could not, exist.

I’m looking forward to seeing Richard’s future excursions.

Secrets of Consciousness

Here’s an IAI discussion between Philip Goff, Susan Blackmore, and Nicholas Humphrey, chaired by Barry Smith. There are some interesting points made, though overall it may have been too ambitious to try to get a real insight into three radically different views on the broad subject of phenomenal consciousness in a single short discussion. I think Goff’s panpsychism gets the lion’s share of attention and comes over most clearly. In part this is perhaps because Goff is good at encapsulating his ideas briefly; in part it may be because of the noticeable bias in all philosophical debate towards the weirdest idea getting the most discussion (it’s more fun and more people want to contradict it); it may be partly just a matter of Goff being asked first and so getting more time.

He positions panpsychism (the view, approximately, that consciousness is everywhere) attractively as the alternative to the old Scylla and Charybdis of dualism on one hand and over-enthusiastic materialist reductionism on the other. He dodges some of the worst of the combination problem by saying that his version of panpsychism doesn’t say that every arbitrary object – like a chair – has to be conscious, only that there is a general, very simple form of awareness in stuff generally – maybe at the level of elementary particles. Responding to the suggestion that panpsychism is the preference for theft over honest toil (just assume consciousness) he rightly says that not all explanations have to be reductive explanations, but makes a comparison I think is dodgy by saying that James Clerk Maxwell, after all, did not reduce electromagnetism to mass or other known physical entities. No, but didn’t Maxwell reduce light, electricity, and magnetism to one phenomenon? (He also provided elegant equations, which I think no-one is about to do for consciousness. Yes, Tononi, put your hand down, we’ll talk about that another time.)

Susan Blackmore is a pretty thorough sceptic: there really is no such thing as subjective consciousness. If we meditate, she says, we may get to a point where we understand this intuitively, but alas, it is hard to explain so convincingly in formal theoretical terms. Maybe that’s just what we should expect though.

Humphrey is also a sceptic, but of a more cautious kind: he doesn’t want to say that there is no such thing as consciousness, but he agrees it is a kind of illusion and prefers to describe it as a work of art (thereby, I suppose, avoiding objections along the lines that consciousness can’t be an illusion because the having of illusions presupposes the having of consciousness by definition). He positions himself as less of a sceptic in some ways than the other two, however: they, he says, hold that consciousness cannot be observed through behaviour – but if not, what are we even doing talking about it?

Alters of the Cosmos

We are the alternate personalities of a cosmos suffering from Dissociative Identity Disorder (DID). That’s the theory put forward by Bernardo Kastrup in a recent JCS paper and supported by others in Scientific American. I think there’s no denying the exciting elegance of the basic proposition, but in my view the problems are overwhelming.

DID is now the correct term for what used to be called Multiple Personality Disorder, a condition in which different persons appear to inhabit the same body, with control passing between them and allowing them to exhibit distinct personalities, different knowledge, and varied behaviour. Occasionally it has been claimed that different ‘alters’ can even change certain physical characteristics of the host body, within limits. Sceptical analysis notes that the incidence of DID has been strongly correlated with its portrayal in the media. A popular film about multiple personalities always seems to bring a boom in new diagnoses, and in fact an early ‘outbreak’ corresponded with the popularity of ‘Jekyll and Hyde’. Sceptics have suggested that DID may often, or always, be iatrogenic in part, with the patient confabulating the number and type of alters the therapist seems to expect.

Against that, the SA piece cites findings that when blind alters were in control, normal visual activity in the brain ceased. This is undoubtedly striking, though a caveat should be entered over our limited ability to spot what patterns of brain activity go along with confabulation, hypnosis, self-deception, etc. I think the research cited establishes pretty clearly that DID is ‘real’ (though not that patients correctly understand its nature), but then I believe only the hardest of sceptics ever thought DID patients were merely weird liars.

Does DID have the metaphysical significance Kastrup would give it, though? One fundamental problem, to get it up front, is this: if we, as physical human beings, are generated by DID in the cosmic consciousness, and that DID is literally the same thing as the DID observed in patients, how come it doesn’t generate a new body for each of the patient’s alters? There doesn’t seem to be a clear answer on this. I would say that the most reasonable response would be to deny that cosmic and personal DID are exactly the same phenomenon and regard them as merely analogous, albeit perhaps strongly so.

Kastrup’s account does tackle a lot of problems. He approaches his thesis by considering related approaches such as panpsychism or cosmopsychism, and the objections to them, notably the combination or decombination problems, which concern how we get from millions of tiny awarenesses, or from one overarching one, to the array of human and animal ones we actually find in the world. His account seems clear and sensible to me, providing convincing brief analyses of the issues.

In Kastrup’s system we begin with a universal consciousness which consists of a sort of web of connected thoughts and feelings. Later there will be perceptions, but at the outset there’s nothing to perceive; I’m not sure what the thoughts could be about, either – pure maths, perhaps – but they arise from the inherent tendency of the cosmic consciousness to self-excite (just as a normal human mind, left without external stimulus, does not fall silent, but generates thoughts spontaneously). The connections between the thoughts may be associative, logical, inspirational, and so on. I’m not clear whether Kastrup envisages all these thoughts and feelings being active at the same time, or whether new ones can be generated and added in. There is a vast amount of metaphysical work to be done on this kind of aspect of the theory – enough for several generations of philosophers – and it may not be fair to expect Kastrup to have done it all, let alone get it all into this single paper.

I think the natural and parsimonious way to go from there would be solipsism. The cosmic consciousness is all there is, and these ideas about other people and external reality are just part of its random musings. The only argument against this simple position is that our experience insistently and pretty consistently tells us about a world of planets, animals, and evolution which not only forces itself on our attention, but on examination provides some rather good partial explanations of our nature and cognitive abilities. But to accept that argument is to surrender to the conventional view, which Kastrup – who identifies as an idealist – is committed to rejecting.

So instead he takes a different view. Somehow (?), islands of the overall web of cosmic consciousness may get detached. They then become dissociated consciousnesses, and can both perceive and be perceived. Since their associative links with the rest of the cosmos have been broken, I don’t quite know why they don’t lapse into solipsistic beings themselves, unable to follow the pattern of their thoughts beyond its own compass.

In fact, and this may be the strangest thing in the theory, our actual bodies, complete with metabolism and all the rest, are the appearance of these metaphysical islands: ‘living organisms are the revealed appearance of alters of universal consciousness’. Quite why the alters of universal consciousness should look like evolved animals, I don’t know. How does sex between these alters give rise to a new dissociative island in the form of a new human being? What on earth happens when someone starves to death? It seems that Kastrup really wants to have much of the conventional world back; a place where autonomous individuals with private thoughts are nevertheless able to share ideas about a world which is not just the product of their imaginations. But it’s forbiddingly difficult to get there from his starting position. For once, weirder ideas might be easier to justify.

These are, of course, radical new ideas; but curiously they seem to me to bear a strong resemblance to the old ones of the Gnostics. They (if my recollection is right) thought that the world started with the perfect mind of God, which then through some inscrutable accident shed fragmentary souls (us) which became bound in the material world, with their own true nature hidden from them. I don’t make the comparison to discredit Kastrup’s ideas; on the contrary if it were me I should be rather encouraged to have these ancient intellectual forebears.

Flat thinking

Nick Chater says The Mind Is Flat in his recent book of that name. It’s an interesting read which quotes a good deal of challenging research (although quite a lot is stuff that I imagine would be familiar to most regular readers here). But I don’t think he establishes his conclusion very convincingly. Part of the problem is a slight vagueness about what ‘flatness’ really means – it seems to mean a few different things and at times he happily accepts things that seem to me to concede some degree of depth. More seriously, the arguments range from pretty good through dubious to some places where he seems to be shooting himself in the foot.

What is flatness? According to Chater the mind is more or less just winging it all the time, supplying quick interpretations of the sensory data coming in at that moment (which by the way is very restricted) but having no consistent inner core; no subconscious, no self, and no consistent views. We think we have thoughts and feelings, but that’s a delusion; the thoughts are just the chatter of the interpreter, and the feelings are just our interpretation of our own physiological symptoms.

Of course there is a great deal of evidence that the brain confabulates a great deal more than we realise, making up reasons for our behaviour after the fact. Chater quotes a number of striking experiments, including the famous ones on split-brain patients, and tells surprising stories about the power of inattentional blindness. But it’s a big leap from acknowledging that we sometimes extemporise dramatically to the conclusion that there is never a consistent underlying score for the tune we are humming. Chater says that if Anna Karenina were real, her political views would probably be no more settled than those of the fictional character, about whom there are no facts beyond what her author imagined. I sort of doubt that; many people seem to me to have relatively consistent views, and even where the views flip around they’re not nugatory. Chater quotes remarkable experiments on this, including one in which subjects asked political questions via a screen with a US flag displayed in the corner gave significantly more Republican answers than those who answered the same questions without the flag – and moreover voted more Republican eight months later. Chater acknowledges that it seems implausible that this one experiment could have conditioned views in a way that none of the subjects’ later experiences could do (though he doesn’t seem to notice that being able to hold to the same conditioning for eight months rather contradicts his main thesis about consistency); but in the end he sort of thinks it did. These days we are more cautious about the interpretation of psychological experiments than we used to be, and the most parsimonious explanation might be something wrong with the experiment. An hypothesis Chater doesn’t consider is that subjects are very prone to guessing the experimenter’s preferences and trying to provide what’s wanted. It could plausibly be the case that subjects whose questions were accompanied by patriotic symbols inferred that more right-wing answers would be palatable here, and thought the same when asked about their vote much later by the same experimenter (irrespective of how they actually voted – something we can’t in fact know, given the secrecy of the ballot).

Chater presents a lot of good evidence that our visual system takes in only a tiny trickle of data; as little as one shape and one colour at a time, it seems. He thinks this shows that we’re not having a rich experience of the world; we can’t be, because the data isn’t there. But isn’t this a perverse interpretation? He accepts that we have the impression of a rich experience; given the paucity of data, which he evidences well, this impression can surely only come from internal processing and internal models – which looks like mental depth after all. Chater, I think, would argue that unconscious processing does take place but doesn’t count as depth; but it’s hard to see why not. In a similar way he accepts that we remember our old interpretations and feed them into current interpretations, even extrapolating new views; but this process, which looks a bit like reflection and internal debate, does not count as depth either. Here, it begins to look as if Chater’s real views are more commonsensical than he will allow.

But not everywhere. On feelings, Chater is in a tradition stretching back to William James, who held that our hair doesn’t stand on end because we’re feeling fear; rather, we feel fear because we’re experiencing hair-rise (along with feelings in the gut, goose bumps, and other physiological symptoms). The conscious experience comes after the bodily reaction, not before. Similar views were held by the behaviourists, of course; these reductive ideas are congenial because they mean emotions can be studied objectively from outside, without resort to introspection. But they are not very plausible. Again, we can accept that it sometimes happens that way. If stomach ache makes me angry, I may well go on to find other things to be angry about. If I go out at night and feel myself tremble, I may well decide it is because I am frightened. But if someone tells a ghost story, it is not credible that the fear doesn’t come from my conscious grasp of the narrative.

I think Chater’s argument for the non-existence of the self is perhaps his most alarming. It rests on a principle (or a dogma; he seems to take it as more or less self-evident) that there is nothing in consciousness but the interpretation of sensory inputs. He qualifies this at once by allowing dreams and imagination, a qualification which would seem to give back almost everything the principle took away, if we took it seriously; but Chater really does mean to restrict us to thinking about entities we can see, touch or otherwise sense. He says we have no conscious knowledge of abstractions, not even such everyday ones as the number five. The best we can do is proxies such as an image of five dots, or of the numeral ‘5’. But I don’t think that will wash. A collection of images is not the same as the number five; in fact, without understanding what five is, we wouldn’t be able to pick out which groups of dots belonged to the collection. Chater says we rely on precedent, not principle, but precedents are useless without the interpretive principles that tell us when they apply. I don’t know how Chater, on his own account, is even aware of such things as the number five; he refuses to address the metaphysical issues beyond his own assertions.

I think Chater’s principle rules out arithmetic, let alone higher maths, and a good deal besides, but he presumably thinks we can get by somehow with the dots. Later, however, he invokes the principle again to dismiss the self. Because we have no sensory impressions of the self, it must be incoherent nonsense. But there are proxies for the self – my face in the mirror, the sound of my voice, my signature on a document – that seem just as good as the dots and numerals we’ve got for maths. Consistency surely requires that Chater either accepts the self or dumps mathematics.

As a side comment, it’s interesting that Chater introduces his principle early and only applies it to the self much later, when he might hope we have forgotten the qualifications he entered, and the issues over numbers. I do not suggest these are deliberate presentational tactics, rather they seem good evidence of how we often choose the most telling way of giving an argument unconsciously, something that is of course impossible in his system.

I’m much happier with Chater’s view of AI. Early on, he gives a brief account of the failure of the naive physics project, which attempted to formalise our ‘folk’ understanding of the physical world. He seems to conflate this with the much wider project of artificial general intelligence, but he is right about the limitations it points to. He thinks computers lack the ‘elasticity’ of human thought, and are unlikely to acquire it any time soon.

A bit of a curate’s egg, then. A lot of decent, interesting science, with some alarming stuff that seems philosophically naive (a charge I hesitate to make because it is always possible to adduce sophisticated justifications for philosophical positions that seem daft on the face of it; indeed, that’s something philosophers particularly enjoy doing).

Probably Right

Judea Pearl says that AI needs to learn causality. Current approaches, even the fashionable machine learning techniques, summarise and transform, but do not interpret the data fed to them. They are not really all that different from techniques that have been in use since the early days.

What is he on about? About twenty years ago, Pearl developed methods of causal analysis using Bayesian networks. These have yet to achieve the general recognition they seem to deserve (I’m afraid they’re new to me). Probably one reason Pearl’s calculus has not achieved wider fame is the sheer difficulty of understanding it. A rigorous treatment involves a lot of equations that are challenging for the layman, and the rationale is not very intuitive at some points, even to those who are comfortable with the equations. The models have a prestidigitatory quality (Hey presto!) of seeming to bring solid conclusions out of nothing (but then much of Bayesian probability has a bit of that feeling for me). Pearl has now published a new book which tries to make all this accessible to the general reader.
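To give a rough flavour before we get to the book: here is a minimal sketch in Python of the most basic distinction Pearl’s calculus rests on – observing that something happened versus intervening to make it happen. The toy model and its numbers are my own invention, not anything from Pearl; the point is just that with a hidden common cause Z in play, the observational and interventional probabilities come apart.

```python
import random

random.seed(0)

def sample(do_x=None):
    """One draw from a toy model: Z -> X, Z -> Y, and X -> Y.
    Z is a hidden confounder; passing do_x severs the Z -> X arrow."""
    z = random.random() < 0.5
    if do_x is None:
        x = random.random() < (0.8 if z else 0.2)   # X tends to follow Z
    else:
        x = do_x                                     # intervention: X set by fiat
    y = random.random() < 0.1 + 0.5 * z + 0.2 * x   # Y depends on both Z and X
    return x, y

N = 100_000

# Observational: P(Y=1 | X=1), estimated by filtering naturally occurring samples.
obs = [y for x, y in (sample() for _ in range(N)) if x]
print("P(Y=1 | X=1)     ~", round(sum(obs) / len(obs), 2))   # ~0.70, inflated by Z

# Interventional: P(Y=1 | do(X=1)), estimated by forcing X=1 in every sample.
intv = [y for _, y in (sample(do_x=True) for _ in range(N))]
print("P(Y=1 | do(X=1)) ~", round(sum(intv) / len(intv), 2)) # ~0.55, the causal effect
```

The gap between the two printed numbers is exactly the correlation-versus-causation problem in miniature: conditioning on X=1 drags the confounder along with it, while intervening does not.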

Difficult as they may be, his methods seem to have implications that are wide and deep. In science, they mean that randomised controlled trials are no longer the only game in town. They provide formal methods for tackling the old problem of distinguishing between correlation and causation, and they allow the quantification of probabilities in counterfactual cases. Michael Nielsen gives a bit of a flavour of the treatment of causality if you’re interested. Does this kind of analysis provide new answers to Hume’s questions about causality?

Pearl suggests that Hume, covertly or perhaps unconsciously, had two definitions of causality; one is the good old constant conjunction we know and love (approximately, A caused B because when A happens B always happens afterwards), the other in terms of counterfactuals (we can see that if A had not happened, B would not have happened either). Pearl lines up with David Lewis in suggesting that the counterfactual route is actually the way to go, with his insights offering new formal techniques. He further thinks it’s a reasonable speculation that the brain might be structured in ways that enable it to use similar techniques, but neither this nor the details of how exactly his approach wraps up the philosophical issues are set out fully. That’s fair enough – we can’t expect him to solve everyone else’s problems as well as the pretty considerable ones he does deal with – but it would be good to see a professional philosophical treatment (maybe there is one I haven’t come across?). My hot take is that this doesn’t altogether remove the Humean difficulties; Pearl’s approach still seems to rely on our ability to frame reasonable hypotheses and make plausible assumptions, for example – but I’m far from sure. It looks to me as if this is a subject philosophers writing about causation or counterfactuals now need to understand, rather the way philosophers writing about metaphysics really ought to understand relativity and quantum physics (as they all do, of course).
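Counterfactuals, as I understand it, get a similarly computational treatment in Pearl’s framework, via three steps: abduction (work back from the evidence to the unobserved background conditions), action (set the variable of interest by fiat), and prediction (rerun the model). Continuing my toy model from above – again my own illustration, not Pearl’s code – a crude rejection-sampling version might look like this:

```python
import random

random.seed(1)

def y_given(z, x, u_y):
    """Structural equation for Y, with its exogenous noise u_y made explicit."""
    return u_y < 0.1 + 0.5 * z + 0.2 * x

# Estimate P(Y would still have happened had X not happened, given that
# we actually saw X=1 and Y=1), by the abduction/action/prediction steps.
kept, y_counterfactual = 0, 0
while kept < 20_000:
    u_z, u_x, u_y = random.random(), random.random(), random.random()
    z = u_z < 0.5
    x = u_x < (0.8 if z else 0.2)
    if x and y_given(z, x, u_y):          # abduction: keep only noise settings
        kept += 1                         # consistent with the evidence
        y_counterfactual += y_given(z, False, u_y)   # action + prediction:
                                                     # same world, X forced off
print("P(Y=1 had X been 0 | X=1, Y=1) ~", round(y_counterfactual / kept, 2))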

What about AI? Is he right? I think he is, up to a point. There is a large problem which has consistently blocked the way to Artificial General Intelligence, to do with the computational intractability of undefined or indefinite domains. The real world, to put it another way, is just too complicated. This problem has shown itself in several different areas in different guises. I think such matters as the Frame Problem (in its widest form), intentionality/meaning, relevance, and radical translation are all places where the same underlying problem shows up, and it is plausible to me that causality is another. In real world situations, there is always another causal story that fits the facts, and however absurd some of them may be, an algorithmic approach gets overwhelmed or fails to achieve traction.

So while people studying pragmatics have got the beast’s tail, computer scientists have got one leg, Quine had a hand on its flank, and so on. Pearl, maybe, has got its neck. What AI is missing is the underlying ability that allows human beings to deal with this stuff (IMO the human faculty of recognition). If robots had that, they would indeed be able to deal with causality and much else besides. The frustrating possibility here is that Pearl’s grasp on the neck actually gives him a real chance to capture the beast, or in other words that his approach to counterfactuals may contain the essential clues that point to a general solution. Without a better intuitive understanding of what he says, I can’t be sure that isn’t the case.

So I’d better read his book, as I should no doubt have done before posting, but you know…

 

No problem

The older I get, the less impressed I am by the hardy perennial of free will versus determinism. It seems to me now like one of those completely specious arguments that the Sophists supposedly used to dumbfound their dimmer clients.

One of their regulars apparently went like this. Your dog has had pups? And it belongs to you? Then it’s a mother, and it’s yours. Ergo, it’s your mother!!!

If we can take this argument seriously enough to diagnose it, we might point out that ‘your’ is a word with several distinct uses. One is to pick out items that are your legal property; another is the wider one of picking out items that pertain to you in some other sense. We can, for example, use it to pick out the single human being that is your immediate female progenitor. So long as we are clear about these different senses, no problem arises.

Is free will like that? The argument goes something like this. Your actions were ultimately determined by the laws of physics. An action which was determined in advance is not free. Ergo, physics says none of your actions were free!!!

But there are two entirely different senses of “determined” in play here. When I ask if you had a free choice, I’m not asking a metaphysical question about whether an interruption to the causal sequence occurred. I’m asking whether you had a gun to your head, or something like that.

Now some might argue that although the two senses are distinct, the physics one over-rides the psychological one and renders it meaningless. But it doesn’t. A metaphysical interruption to the causal sequence wouldn’t give me freedom anyway; it might give me a random factor, but freedom is not random. What I want to know is, did your actions arise out of your conscious thoughts, or did external factors constrain them? That’s all. The undeniable fact that my actions are ultimately constrained by the laws of nature simply isn’t what I’m concerned with.

That constraint really is undeniable, of course; in fact we don’t really need physics. If the world is coherent at all it must be governed by laws, and those laws must determine what happens. If things happened for no reason, we could make no sense of anything. So any comprehensive world view must give us some kind of determinism. We know this well enough, because we are familiar with at least one other comprehensive theory: the view that things happen only because God wills them. This means everything is predestined, and that gives rise to just the same sort of pseudo-problems over free will. In fact, if we want we can get the same problems from logical fatalism, without appealing to either science or theology. Will I choose A tomorrow or not? Necessarily there is a truth of the matter already, so although we cannot know, my decision is already a matter of fact, and in that sense is already determined.

So fundamental determinism is rock solid; it just isn’t a problem for freedom.

Hold on, you may say; you frame this as being about external constraints, but the real question is, am I not constrained internally? Don’t my own mental processes force me to make a particular decision? There are two versions of this argument. The first says that the mere fact that mental processes operate mechanistically means there can be no freedom. I just deny that; my own conscious processes count as a source of free decisions no matter how mechanistic they may be, just so long as they’re not constrained from outside.

The second version of the argument says that while free decisions of that kind might be possible in pure theory, as an empirical matter human beings don’t have the capacity for them. No conscious processes are actually effectual; consciousness is an epiphenomenon and merely invents rationales for decisions taken in a predetermined manner elsewhere in the brain. This argument is appealing because there is, of course, lots of evidence that unconscious factors influence our decisions. But the strong claim that my conscious deliberations are always irrelevant seems wildly implausible to me. Speech acts are acts, so to truly believe this strong version of the theory I’d have to accept that what I think is irrelevant to what I say, or that I adjust my thoughts retrospectively to fit whatever just came out of my mouth (I’m not saying there aren’t some people of whom one could believe this).

Now I may also be attacked from the other side. There may be advocates of free will who say, hold on, Peter, we do actually want that special metaphysical interruption you’re throwing away so lightly. Introspect, dear boy, and notice how your decisions come from nowhere; that over and above the weighing of advantage there just is that little element of inexplicable volition.

This impression comes, I think, from the remarkable power of intentionality. We can think about anything at all, including future or even imaginary contingencies. If our actions are caused by things that haven’t happened yet, or by things that will never actually happen (think of buying insurance) that looks like a mysterious disruption of the natural order of cause and effect. But of course it isn’t really. You may have a different explanation depending on your view of intentionality; mine is briefly that it’s all about recognition. Our ability to recognise entities that extend into the future, and then recognise within them elements that don’t yet exist, gives us the ability to make plans for the future, for example, without any contradiction of causality.

I’m afraid I’ve ended up by making it sound complicated again. Let me wrap it up in time-honoured philosophical style: it all depends what you mean by “determined”…

 

Why would you even think that?

More support for the illusionist perspective in a paper from Daniel Shabasson. He agrees with Keith Frankish that phenomenal consciousness is an illusion, and (taking the metaproblematic road) offers a theory as to why so many people – the great majority, I think – find it undeniably real in spite of the problems it raises.

Shabasson’s theory rests on three principles:

  • impenetrability,
  • the infallibility illusion, and
  • the justification illusion.

Impenetrability says that we have no conscious access to the processes that produce our judgements about sensory experience. We know as a matter of optical/neurological science that our perception of colour rests on some very complex processing of the data detected by our eyes. Patches of paint or groups of pixels emitting exactly the same wavelengths of light may be perceived as quite different colours when our brains take account of the visual context, for example, but the resulting colours are just present to consciousness as facts. We have no awareness of the complex adjustments that have been made.

This point is particularly evident in the case of colour vision, where the processing done by the brain is elaborate and sometimes counter-intuitive. It’s less clear that we’re missing out on much in the way of subtle interpretive processing when we detect a poke in the eye. Generally though, I think the claim is pretty uncontroversial, and in fact our limited access to what’s really going on has been an important part of other theories such as Scott Bakker’s Blind Brain.

Infallibility says that we are prone to assume we cannot be wrong about certain aspects of our experience. Obviously most perceptions could be mistaken, but others, more direct, seem invulnerable to error. I may be mistaken in my belief that there is a piano on my foot, or about the fact that my toe is crushed; but surely I can’t be wrong about the fact that I am feeling pain? Although this idea has been robustly challenged, it has a strong intuitive appeal, perhaps partly out of a feeling that while we can be wrong about external stuff, mental entities are perceived directly, already present in the mind, and therefore immune from the errors that creep in during delivery of external information.

Justification is a little more subtle; the claim is that for any judgement we make, we believe there is some justification. This is not the stronger claim that there is good or adequate justification, just the view that we suppose ourselves to have some reason for thinking whatever we think.

Is that true? What if I fix my thoughts on the fourth nearest star to Earth which has only one planet orbiting it, and judge that the planet in question is smaller than Earth? If I knew more about astronomy I might have reasons for this judgement, but as matters stand, though I feel confident that the planet exists, I have no reasons for any beliefs about its size relative to Earth.

In such a case, I believe Shabasson would either point to probable justifications I have overlooked (perhaps I am making a mistaken but not irrational assumption about a correlation between size and number of planets) or, more likely, simply deny that I have truly made the relevant judgement at all. I might assert that I really believe the planet is small, but I’m really only playing some hypothetical game. I think, in fact, Shabasson can have what he needs for the sake of argument here pretty much by specification.

Given the three principles, various things follow. When we judge ourselves to be having a ‘reddish’ experience, we must be right, and there must be something in our mind that justifies the judgement. That thing is a quale, which must therefore exist. This follows so directly, without requiring effortful reasoning, it seems to us that we apprehend the quale directly. Furthermore, the quale must seem like something, or to put it more fully, there must be something it seems like: if there were nothing a quale were like, there would be no apparent difference between a red and a green quale; but it is of the essence that there are such differences.

What is it like? We can’t say, because in fact it doesn’t exist. Though there really are justificatory properties for our judgements about perceptions, they are not phenomenal ones; but impenetrability means we remain unaware of them. Hence the apparent ineffability of qualia. Impenetrability also gives rise to an impression that qualia are intrinsic; briefly it means that the reddish experience arrives with no other information, and in particular nothing about its relation to other things; it seems it just is. Completing the trio, qualia seem subjective because, given ineffability and intrinsicality, they are only differentiable through introspection, and introspection naturally limits access to a particular single subject.

I don’t think Shabasson has the whole answer (I think, in particular, that the apparent existence of qualia has to do with the particular reality of actual experience, a quality obviously not conveyed by any theoretical account), but I think there are probably several factors that account for our belief in phenomenal experience, and he gives a very clear account of some significant ones. The use of the principle of justification seems especially interesting to me; I wonder if it might help illuminate some other quirks of human psychology.