Probably Right

Judea Pearl says that AI needs to learn causality.  Current approaches, even the fashionable machine learning techniques, summarise and transform, but do not interpret the data fed to them. They are not really all that different from techniques that have been in use since the early days.

What is he on about? About twenty years ago, Pearl developed methods of causal analysis using Bayesian networks. These have yet to achieve the general recognition they seem to deserve (I’m afraid they’re new to me). One reason Pearl’s calculus has probably not achieved wider fame is the sheer difficulty of understanding it. A rigorous treatment involves a lot of equations that are challenging for the layman, and the rationale is not very intuitive at some points, even to those who are comfortable with the equations. The models have a prestidigitatory quality (Hey presto!) of seeming to bring solid conclusions out of nothing (but then much of Bayesian probability has a bit of that feeling for me). Pearl has now published a new book, The Book of Why, which tries to make all this accessible to the layman.

Difficult as they may be, his methods seem to have implications that are wide and deep. In science, they mean that randomised controlled trials are no longer the only game in town. They provide formal methods for tackling the old problem of distinguishing between correlation and causation, and they allow the quantification of probabilities in counterfactual cases. Michael Nielsen gives a bit of a flavour of the treatment of causality if you’re interested. Does this kind of analysis provide new answers to Hume’s questions about causality?
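To give a minimal flavour of the central distinction myself: conditioning on an observation is not the same as intervening. Below is a toy Monte Carlo sketch (my own construction, not Pearl’s formalism; the “smoking gene” model and every number in it are invented for illustration) in which a hidden confounder inflates the observed correlation, while the interventional probability, Pearl’s do(smoke), reflects only smoking’s own effect.

```python
import random

random.seed(0)

# Hypothetical toy model (all numbers invented): a hidden "smoking gene" G
# raises both the chance of smoking and the chance of cancer, and smoking
# itself also raises the chance of cancer: G -> smoke, G -> cancer, smoke -> cancer.
def sample(do_smoke=None):
    g = random.random() < 0.3
    if do_smoke is None:
        smoke = random.random() < (0.8 if g else 0.2)  # observational: G drives smoking
    else:
        smoke = do_smoke                               # do(smoke): the G -> smoke arrow is cut
    p_cancer = 0.1 + (0.3 if g else 0.0) + (0.2 if smoke else 0.0)
    return smoke, random.random() < p_cancer

N = 100_000

# Conditioning: P(cancer | smoke), estimated from passive observation
obs = [sample() for _ in range(N)]
p_obs = sum(c for s, c in obs if s) / sum(s for s, _ in obs)

# Intervening: P(cancer | do(smoke)), estimated by forcing everyone to smoke
p_do = sum(sample(do_smoke=True)[1] for _ in range(N)) / N

print(f"P(cancer | smoke)     = {p_obs:.3f}")   # inflated by the confounder
print(f"P(cancer | do(smoke)) = {p_do:.3f}")    # smoking's own effect only
```

In this made-up model smokers look riskier than they really are, because smoking is itself evidence for the gene; severing the gene-to-smoking arrow is exactly the move that Pearl’s do-operator formalises.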

Pearl suggests that Hume, covertly or perhaps unconsciously, had two definitions of causality; one is the good old constant conjunction we know and love (approximately, A caused B because when A happens B always happens afterwards), the other in terms of counterfactuals (we can see that if A had not happened, B would not have happened either). Pearl lines up with David Lewis in suggesting that the counterfactual route is actually the way to go, with his insights offering new formal techniques. He further thinks it’s a reasonable speculation that the brain might be structured in ways that enable it to use similar techniques, but neither this nor the details of how exactly his approach wraps up the philosophical issues is set out fully. That’s fair enough – we can’t expect him to solve everyone else’s problems as well as the pretty considerable ones he does deal with – but it would be good to see a professional philosophical treatment (maybe there is one I haven’t come across?). My hot take is that this doesn’t altogether remove the Humean difficulties; Pearl’s approach still seems to rely on our ability to frame reasonable hypotheses and make plausible assumptions, for example – but I’m far from sure.  It looks to me as if this is a subject philosophers writing about causation or counterfactuals now need to understand, rather the way philosophers writing about metaphysics really ought to understand relativity and quantum physics (as they all do, of course).

What about AI? Is he right? I think he is, up to a point. There is a large problem which has consistently blocked the way to Artificial General Intelligence, to do with the computational intractability of undefined or indefinite domains. The real world, to put it another way, is just too complicated. This problem has shown itself in several different areas in different guises. I think such matters as the Frame Problem (in its widest form), intentionality/meaning, relevance, and radical translation are all places where the same underlying problem shows up, and it is plausible to me that causality is another. In real world situations, there is always another causal story that fits the facts, and however absurd some of them may be, an algorithmic approach gets overwhelmed or fails to achieve traction.

So while people studying pragmatics have got the beast’s tail, computer scientists have got one leg, Quine had a hand on its flank, and so on. Pearl, maybe, has got its neck. What AI is missing is the underlying ability that allows human beings to deal with this stuff (IMO the human faculty of recognition). If robots had that, they would indeed be able to deal with causality and much else besides. The frustrating possibility here is that Pearl’s grasp on the neck actually gives him a real chance to capture the beast, or in other words that his approach to counterfactuals may contain the essential clues that point to a general solution. Without a better intuitive understanding of what he says, I can’t be sure that isn’t the case.

So I’d better read his book, as I should no doubt have done before posting, but you know…

No problem

The older I get, the less impressed I am by the hardy perennial of free will versus determinism. It seems to me now like one of those completely specious arguments that the Sophists supposedly used to dumbfound their dimmer clients.

One of their regulars apparently went like this. Your dog has had pups? And it belongs to you? Then it’s a mother, and it’s yours. Ergo, it’s your mother!!!

If we can take this argument seriously enough to diagnose it, we might point out that ‘your’ is a word with several distinct uses. One is to pick out items that are your legal property; another is the wider one of picking out items that pertain to you in some other sense. We can, for example, use it to pick out the single human being that is your immediate female progenitor. So long as we are clear about these different senses, no problem arises.

Is free will like that? The argument goes something like this. Your actions were ultimately determined by the laws of physics. An action which was determined in advance is not free. Ergo, physics says none of your actions were free!!!

But there are two entirely different senses of “determined” in play here. When I ask if you had a free choice, I’m not asking a metaphysical question about whether an interruption to the causal sequence occurred. I’m asking whether you had a gun to your head, or something like that.

Now some might argue that although the two senses are distinct, the physics one over-rides the psychological one and renders it meaningless. But it doesn’t. A metaphysical interruption to the causal sequence wouldn’t give me freedom anyway; it might give me a random factor, but freedom is not random. What I want to know is, did your actions arise out of your conscious thoughts, or did external factors constrain them? That’s all. The undeniable fact that my actions are ultimately constrained by the laws of nature simply isn’t what I’m concerned with.

That constraint really is undeniable, of course; in fact we don’t really need physics. If the world is coherent at all it must be governed by laws, and those laws must determine what happens. If things happened for no reason, we could make no sense of anything. So any comprehensive world view must give us some kind of determinism. We know this well enough, because we are familiar with at least one other comprehensive theory; the view that things happen only because God wills them. This means everything is predestined, and that gives rise to just the same sort of pseudo-problems over free will. In fact, if we want we can get the same problems from logical fatalism, without appealing to either science or theology. Will I choose A tomorrow or not? Necessarily there is a truth of the matter already, so although we cannot know, my decision is already a matter of fact, and in that sense is already determined.

So fundamental determinism is rock solid; it just isn’t a problem for freedom.

Hold on, you may say; you frame this as being about external constraints, but the real question is, am I not constrained internally? Don’t my own mental processes force me to make a particular decision? There are two versions of this argument. The first says that the mere fact that mental processes operate mechanistically means there can be no freedom. I just deny that; my own conscious processes count as a source of free decisions no matter how mechanistic they may be, just so long as they’re not constrained from outside.

The second version of the argument says that while free decisions of that kind might be possible in pure theory, as an empirical matter human beings don’t have the capacity for them. No conscious processes are actually effectual; consciousness is an epiphenomenon and merely invents rationales for decisions taken in a predetermined manner elsewhere in the brain. This argument is appealing because there is, of course, lots of evidence that unconscious factors influence our decisions. But the strong claim that my conscious deliberations are always irrelevant seems wildly implausible to me. Speech acts are acts, so to truly believe this strong version of the theory I’d have to accept that what I think is irrelevant to what I say, or that I adjust my thoughts retrospectively to fit whatever just came out of my mouth (I’m not saying there aren’t some people of whom one could believe this).

Now I may also be attacked from the other side. There may be advocates of free will who say, hold on, Peter, we do actually want that special metaphysical interruption you’re throwing away so lightly. Introspect, dear boy, and notice how your decisions come from nowhere; that over and above the weighing of advantage there just is that little element of inexplicable volition.

This impression comes, I think, from the remarkable power of intentionality. We can think about anything at all, including future or even imaginary contingencies. If our actions are caused by things that haven’t happened yet, or by things that will never actually happen (think of buying insurance) that looks like a mysterious disruption of the natural order of cause and effect. But of course it isn’t really. You may have a different explanation depending on your view of intentionality; mine is briefly that it’s all about recognition. Our ability to recognise entities that extend into the future, and then recognise within them elements that don’t yet exist, gives us the ability to make plans for the future, for example, without any contradiction of causality.

I’m afraid I’ve ended up by making it sound complicated again. Let me wrap it up in time-honoured philosophical style: it all depends what you mean by “determined”…

Deep Thoughts

blandula  Gregg Rosenberg’s book “A Place for Consciousness: Probing the Deep Structure of the Natural World” is the most ambitious metaphysical project I have come across for a long time. Not only does it offer a new view of consciousness, it suggests a new and elaborate theory of causality, and a new kind of ontology to go with it. It pushes metaphysical speculation boldly into regions which science has pretty much regarded as its own for many years, and brusquely denies the completeness of the physical account. It also embraces philosophical positions which most would regard as untenable: in this, it recalls David Chalmers, and indeed Rosenberg positions himself as operating in a kind of post-Chalmers context with some Chalmerian (Chelmersian?) leanings.

A breath of fresh air, then? I think so, though I couldn’t sign up unreservedly to the theory, and I have particular reservations about the elaborate apparatus which Rosenberg proposes to deal with causality.

Anyway, what’s it all about? The basis of the theory is the view that conscious experience – qualia in particular – cannot be satisfactorily accommodated within the physicalist account. Rosenberg proposes an analogy with Conway’s “Game of Life”. In this game, we have a world consisting of an indefinitely large grid. Each cell can be “off” or “on”. Some simple rules about adjacent cells determine, for each successive state of the world, which cells will be on or off. It turns out that this simple set-up gives rise to patterns which evolve and behave in complex and interesting ways. We can even construct a huge pattern which acts as a Turing machine.
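Those rules fit in a few lines of code. Here is a minimal sketch (standard Life rules on an unbounded grid, nothing specific to Rosenberg’s use of the game), representing the world simply as the set of “on” cells:

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Life; `alive` is a set of (x, y) 'on' cells."""
    # Count the live neighbours of every cell adjacent to a live cell
    counts = Counter(
        (x + dx, y + dy)
        for x, y in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is on next tick with exactly 3 live neighbours (birth or survival),
    # or with 2 live neighbours if it was already on (survival)
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "blinker": three cells in a row oscillate between horizontal and vertical
blinker = {(0, 0), (1, 0), (2, 0)}
print(sorted(life_step(blinker)))  # → [(1, -1), (1, 0), (1, 1)]
print(life_step(life_step(blinker)) == blinker)  # → True
```

Storing only the live cells makes the board effectively infinite, which matters for the huge patterns mentioned above, such as the Turing-machine construction.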

Now, says Rosenberg, in the Life world, there is nothing but bare differences. You may be able to generate hugely complex entities within the Life world – perhaps even life itself: but there’s no way these bare differences could entail subjective experience. Yet subjective experience is undeniable – qualia are an observable fact. Now when you get right down to it, the world sketched out by physics is also a matter of bare differences. The fundamentals are a little more complex than in the Life world, but in the end you come down to a similar kind of contentless data. It follows that conscious experience is not entailed by physics, and it must therefore be entailed by something else.

bitbucket The Game of Life is fascinating and instructive, but I think there’s a bit of trickery going on here. Rosenberg only ever talks about small sections of the Life world – of course you can’t do much with a grid the size of a chess board. He mentions the Turing machine, which requires a huge grid, of course, but even that is a minute fraction of the size of the array you’d need to model the full working detail of even one human brain. The Life grid needed for even a small community of people just beggars the imagination, and at that point it ceases to be convincing that the thing is too simple to support the kind of mental experience we normally have.

At the end of the day, it’s just another appeal to intuition we’re dealing with here. Rosenberg insists that he’s giving real evidence, but if all you have for evidence is the way something looks to you, I say we’re just sharing intuitions, and mine are different from his.

blandula  I don’t like the argument much, either, but for different reasons. I see why you’re inclined to reject subjective evidence – “subjective” has always been a derogatory term in science. But if it’s the nature of phenomenal experience we’re dealing with, how can you disallow subjective evidence?

Still, I don’t think it’s legitimate to argue that what’s true of a simple game must be true of reality, too. Life is a world only metaphorically; it seems doubtful to me that such a world is really possible (even in the required, fantastically outré, sense of the word “possible”). At least, if it’s to be real, there are going to be some problems about preserving identities between the successive, independent moments which constitute time in the game. And that’s one of the key points: the Life world is, by specification, a discrete-state world consisting of a binary grid. It is completely computational. The real world, by contrast, is messily continuous and full of non-computable stuff. This is particularly relevant because (according to me and many others) consciousness and qualia are among those non-computable features. Arguing from Life to reality just begs the question.

However, as a matter of fact, I think Rosenberg’s incredulity is justified. How can mere physical facts entail subjective experience?

bitbucket It’s obvious that physical facts entail mental facts. Rosenberg chooses to defend his views with some very subtle and sophisticated argumentation about esoteric philosophical points, but we’re not dealing with esoteric matters here. A smack round the head with a spade is a physical fact, and entails plenty of consequences for your so-called “qualia”. Less brutally, having a red object stuck in front of my eyes in good lighting conditions entails an experience of redness in my mind. I know there’s plenty of scope for quibbling about the exact nature of the counterfactual conditionals and all that, but at the end of the day are you going to say there’s no entailment between the physical facts and mental experiences? That’s going to disconnect your experiences from the world altogether, isn’t it? You could take shelter in a restricted sense of “entailment”, but to his credit, Rosenberg quite rightly argues that we need to understand entailment in a broad sense here – certainly much wider than formal or logical entailment, anyway.

But look – the last century or so has seen an accelerating accumulation of experimental results which show just how closely our mental life depends on the physical operation of our brain: yet none of this seems to have impinged on Rosenberg. He argues from a perspective that is almost medieval. We know better than that.

blandula  No, no! You talk as though he were arguing for dualism. Rosenberg isn’t trying to disconnect the physical and mental worlds: he’s just pointing out that there is more to be said about the world than the bare facts of physics. Remember this is pure physics we’re talking about. I think any unprejudiced observer, even a thorough materialist, would accept that there are aspects of the world which cannot be captured by talking about basic quantum physics. I realise some of the arguments are a bit sophisticated for your taste, but isn’t it true, as Rosenberg says, that if conscious experience were directly entailed by simple physics, it would be, in some sense, a free lunch? The idea of getting an additional phenomenal world for free in this sense really is hard to swallow, I think, and I find the argument on this persuasive.

Anyway, the second stage of the argument is a consideration of panexperientialism. This is, if you like, a weaker version of panpsychism. It’s not that everything has a mind of its own, but rather that everything has a limited share of simple experience. Rosenberg distinguishes between the kind of full subjective experience human beings have, complex and infused with cognition, and the kind of tiny spark of subjectivity an inanimate particle might be thought to have. Ultimately, his theory sets out a way of building higher-level entities out of these tiny sparks.

There’s a curious use here of Ned Block’s Chinese Nation argument. Block proposed that each Chinese citizen could be set to reproducing the behaviour of a single neuron in a brain: would the resulting higher-level entity really be conscious? (It has been pointed out that in fact the population of China is nothing like as large as the number of neurons in a human brain, and the example has an unfortunate racial tinge, especially taken in conjunction with Searle’s Chinese room argument.) Rosenberg says yes, and uses the argument to show that experiencing entities could exist on several levels. For reasons which are not completely clear to me, he regards this as a problem – if experience can happen at different levels, he feels the only logical stopping points are either consciousness as a property of every atom, or consciousness as a property only of the cosmos as a whole. He suggests that there is a “boundary problem” about why consciousness has the limits it does. The existence of consciousness at a middle level therefore needs explanation – but surely the argument actually shows that it could equally well exist at various levels? Rosenberg, of course, ultimately wants to offer an explanation of how low-level experience can be built up into higher-level structures.

bitbucket Poor old Occam must be spinning in his grave at this point: but can I just spool the argument back for a moment? Rosenberg began by arguing that nothing like bare differences in the world could entail the phantasmagoria of subjective experience, right? But now – and only now – he starts to suggest that subjective experience might be made up of minute glints of experience. Now it seems to me that these tiny sparks of experience look a lot more like the kind of thing you might get from bare differences, and if Rosenberg had started with them his argument would have looked much less persuasive.

There’s something a bit shifty about his discussion of panexperientialism, anyway. I’d like it a lot better if he came out thumping the table and declaring that panexperientialism is true, and here’s why. Instead, he sort of argues that there’s no reason why it couldn’t be true – maybe it’s even likely? The real reason he wants us to accept the plausibility of panexperientialism is that he wants it as the foundation for his theory, but in itself there are lots of reasons to dismiss it. One, of course, is that it adds an enormous number of experiencing entities to the world for no particular reason, and hence offends against parsimony – though parsimony doesn’t seem to be a principle Rosenberg values very much. Second, once again, we know quite well that the functional properties of the brain are closely associated with our ability to have experiences – even quite simple ones. A certain minimum of structure is necessary to have those functional qualities, and single particles certainly don’t have that minimum. Rosenberg argues against functionalism itself, but you don’t have to think that functional properties constitute consciousness in order to see that it depends on them.

blandula  I think it’s true that Rosenberg basically wants panexperientialism for the sake of the theory he builds on it, but why not? If the theory accounts for consciousness and causality, then it’s well worth it.

The next step in the argument, in any case, is an assault on causality. Rosenberg launches an attack on what he characterises as Humean views. I have to say that Hume comes over here as a dogmatic figure rather at odds with the gently devastating agnostic I’m familiar with, but causality undoubtedly remains one of the great mysteries. Rosenberg wants us to unscramble some of our assumptions and stop thinking purely in terms of causal responsibility. He proposes the more general idea of causal significance, and wants to deal separately with effective and receptive causal properties. The physical account, he suggests, deals only with effective properties, and is hence one-sided.

bitbucket Yes – according to him, we’re all talking about effective causal properties. Actually, I think that’s another mediaevalism, and most of us are not talking about causal properties at all – they sound worryingly vitalistic to me. In another place Rosenberg points out, more accurately, I think, that the account given by physics doesn’t actually make use of the concepts of cause and effect as such – it simply describes certain regularities in the space-time continuum. Now I think the normal view is to see cause and effect as simply a matter of an arbitrary cross-section or a line across this continuum. No particular line is privileged; it’s just a matter of what you happen to find salient or interesting at the time. The whole idea of “causal powers” is redundant. And then he goes on to say that receptive properties are connections? What does that mean? How can receptive causal properties be connections?

blandula  That is really the key to the system. A simple model would have individual elements each with its own effective and receptive facets, but as I understand it, Rosenberg prefers to see receptive properties as properties which connect individual elements. The complexes so formed have both receptive and effective properties and constitute “natural individuals”. Causal relations between individuals and complexes help to impose constraints on the possible states of the component individuals, with the system tending towards higher levels of determinateness where possible. I must admit that the apparatus proposed here is rather complex and the motivation for some of the details is not always clear to me, so I might be misrepresenting the theory. It seems to be Rosenberg’s idea that phenomenal experience is the ultimate substrate or “carrier” of physics itself. Moreover, the concatenation of simple elements into higher level complexes eventually gives rise to true consciousness: the Consciousness Hypothesis tells us that “Each individual consciousness carries the nomic content of a cognitively structured, high-level natural individual. Conscious experience is experience of the total constraint structure active in the receptive field of an individual.”

bitbucket Frankly, that seems to me just an over-developed and unduly obscure version of a Higher Order theory. Causation looks to me like a primitive – one of those basic concepts you can’t analyse. When you try to do it, the primitive keeps popping up in your explanation, reducing it to circularity. Don’t you think that happens here? These individuals which get grouped together – they are imposing constraints on each other, but doesn’t that mean they are causing each other to be one way and not another? Yet we are supposed to be below the level of cause and effect here – we’re meant to be explaining how causes work!

The real killer is this. Rosenberg set out to explain qualia, but at the end of the day it seems to me your real qualophile would say: yes, that’s all very interesting, Gregg – thing is, I can imagine all of that happening without my actually experiencing the real redness of red. I don’t see anything in your theory which actually catches the vivid reality of subjective experience. Now of course, in my eyes all talk of qualia is so much hot air, but I don’t see why that would be any less plausible than the case for qualia was in the first place.

blandula  I think the impression of circularity arises from your using an unduly loose sense of “cause”, and I don’t see how anyone could read a theory about relations between subjective experiences without seeing how it relates to qualia.

I don’t buy the theory completely myself, as a matter of fact, but to me it’s a very welcome piece of radical new thinking, and unlike some others, this is a book I intend to read again.