Probably Right

Judea Pearl says that AI needs to learn causality. Current approaches, even the fashionable machine learning techniques, summarise and transform the data fed to them, but do not interpret it. They are not really all that different from techniques that have been in use since the early days.

What is he on about? About twenty years ago, Pearl developed methods of causal analysis using Bayesian networks. These have yet to achieve the general recognition they seem to deserve (I’m afraid they’re new to me). One reason Pearl’s calculus has probably not achieved wider fame is the sheer difficulty of understanding it. A rigorous treatment involves a lot of equations that are challenging for the layman, and the rationale is not very intuitive at some points, even to those who are comfortable with the equations. The models have a prestidigitatory quality (Hey presto!) of seeming to bring solid conclusions out of nothing (but then much of Bayesian probability has a bit of that feeling for me). Pearl has now published a new book, The Book of Why, which tries to make all this accessible to the layman.
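For those who, like me, are coming to this fresh, it may help to see the basic machinery. A Bayesian network just factorises a joint probability distribution along a directed graph, and a small one can be queried by brute-force enumeration. Here’s a toy sketch in Python using the textbook rain/sprinkler/wet-grass example (the numbers are invented, and none of this is Pearl’s own notation):

```python
from itertools import product

# Toy network: Rain -> Sprinkler, Rain -> Wet, Sprinkler -> Wet.
# The joint factorises as P(R, S, W) = P(R) * P(S | R) * P(W | R, S).
def p_rain(r):
    return 0.2 if r else 0.8

def p_sprinkler(s, r):
    p_on = 0.01 if r else 0.4          # the sprinkler rarely runs in the rain
    return p_on if s else 1 - p_on

def p_wet(w, r, s):
    p = {(True, True): 0.99, (True, False): 0.8,
         (False, True): 0.9, (False, False): 0.0}[(r, s)]
    return p if w else 1 - p

# P(rain | grass is wet), summing the sprinkler out of the joint.
num = sum(p_rain(True) * p_sprinkler(s, True) * p_wet(True, True, s)
          for s in (True, False))
den = sum(p_rain(r) * p_sprinkler(s, r) * p_wet(True, r, s)
          for r, s in product((True, False), repeat=2))
print(round(num / den, 3))   # ~0.358: wet grass raises belief in rain from 0.2
```

That much is ordinary probabilistic inference; the causal calculus begins when you ask what happens if you set a variable rather than merely observe it, which is where the next point comes in.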

Difficult as they may be, his methods seem to have implications that are wide and deep. In science, they mean that randomised controlled trials are no longer the only game in town. They provide formal methods for tackling the old problem of distinguishing between correlation and causation, and they allow the quantification of probabilities in counterfactual cases. Michael Nielsen gives a bit of a flavour of the treatment of causality if you’re interested. Does this kind of analysis provide new answers to Hume’s questions about causality?
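Here’s a homemade toy illustration of that distinction (my own sketch, not Pearl’s example): in a little structural model where a hidden factor Z raises both X and Y, the probability of Y among cases where X is observed to be 1 differs from the probability of Y when we step in and set X to 1, which is what Pearl’s do-operator expresses.

```python
import random
random.seed(0)

def p_y_given_x1(intervene=False, n=200_000):
    """Toy model: hidden Z -> X, Z -> Y, and X -> Y.
    Estimates P(Y=1 | X=1) observationally, or P(Y=1 | do(X=1))."""
    x1 = y1 = 0
    for _ in range(n):
        z = random.random() < 0.5                      # hidden confounder
        if intervene:
            x = True                                   # do(X=1): Z no longer decides X
        else:
            x = random.random() < (0.8 if z else 0.2)  # Z pushes X up
        y = random.random() < 0.1 + 0.2 * x + 0.5 * z  # Z and X both raise Y
        if x:
            x1 += 1
            y1 += y
    return y1 / x1

print(round(p_y_given_x1(), 2))                # ~0.70: seeing X=1
print(round(p_y_given_x1(intervene=True), 2))  # ~0.55: setting X=1
```

Conditioning leaves the back-door path through Z open, so the observational figure overstates the causal effect; intervening severs the arrow from Z to X, and the gap between the two numbers is exactly the gap between correlation and causation.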

Pearl suggests that Hume, covertly or perhaps unconsciously, had two definitions of causality: one is the good old constant conjunction we know and love (approximately, A caused B because when A happens, B always happens afterwards); the other is in terms of counterfactuals (we can see that if A had not happened, B would not have happened either). Pearl lines up with David Lewis in suggesting that the counterfactual route is actually the way to go, with his insights offering new formal techniques. He further thinks it’s a reasonable speculation that the brain might be structured in ways that enable it to use similar techniques, but neither this nor the details of how exactly his approach wraps up the philosophical issues are set out fully. That’s fair enough – we can’t expect him to solve everyone else’s problems as well as the pretty considerable ones he does deal with – but it would be good to see a professional philosophical treatment (maybe there is one I haven’t come across?). My hot take is that this doesn’t altogether remove the Humean difficulties; Pearl’s approach still seems to rely on our ability to frame reasonable hypotheses and make plausible assumptions, for example – but I’m far from sure. It looks to me as if this is a subject philosophers writing about causation or counterfactuals now need to understand, rather as philosophers writing about metaphysics really ought to understand relativity and quantum physics (as they all do, of course).
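For what it’s worth, my understanding is that Pearl evaluates a counterfactual in three steps – abduction, action, prediction – and in a toy linear model with invented numbers the recipe looks like this:

```python
# Toy structural model (invented): X := U_x ;  Y := 2*X + U_y.
# We observed X = 1 and Y = 3, and ask: what would Y have been had X been 0?

x_obs, y_obs = 1.0, 3.0

# Step 1, abduction: recover the background noise consistent with the facts.
u_x = x_obs               # from X := U_x
u_y = y_obs - 2 * x_obs   # from Y := 2*X + U_y, so U_y = 1

# Step 2, action: override the mechanism for X with the counterfactual value.
x_cf = 0.0                # do(X = 0)

# Step 3, prediction: rerun the model with the same noise but the new X.
y_cf = 2 * x_cf + u_y
print(y_cf)               # 1.0 -- had X been 0, Y would have been 1
```

The trick is that the noise inferred from the actual world is carried unchanged into the hypothetical one; that, as far as I can see, is what makes the query counterfactual rather than merely interventional.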

What about AI? Is he right? I think he is, up to a point. There is a large problem that has consistently blocked the way to Artificial General Intelligence, to do with the computational intractability of undefined or indefinite domains. The real world, to put it another way, is just too complicated. This problem has shown itself in several different areas in different guises. I think such matters as the Frame Problem (in its widest form), intentionality/meaning, relevance, and radical translation are all places where the same underlying problem shows up, and it is plausible to me that causality is another. In real-world situations, there is always another causal story that fits the facts, and however absurd some of them may be, an algorithmic approach gets overwhelmed or fails to achieve traction.

So while people studying pragmatics have got the beast’s tail, computer scientists have got one leg, Quine had a hand on its flank, and so on. Pearl, maybe, has got its neck. What AI is missing is the underlying ability that allows human beings to deal with this stuff (IMO the human faculty of recognition). If robots had that, they would indeed be able to deal with causality and much else besides. The frustrating possibility here is that Pearl’s grasp on the neck actually gives him a real chance to capture the beast, or in other words that his approach to counterfactuals may contain the essential clues that point to a general solution. Without a better intuitive understanding of what he says, I can’t be sure that isn’t the case.

So I’d better read his book, as I should no doubt have done before posting, but you know…


11 thoughts on “Probably Right”

  1. Peter

    It seems to me that maths doesn’t have the information to solve the causal/correlative issue. What frequently happens with AI techniques is that they are so stuffed with ad hoc intelligence and nudges from “unartificial” intelligence (humans) that it’s ludicrous to credit them with much intelligence in the first place. If you are looking for correlations in big data sets, the possible combinations are effectively infinite, so one of the first steps in AI is for UI (unartificial intelligence – humans) to make guesses about which data are more likely to correlate than others. That requires semantic understanding of the data and a wider awareness of how they might interact.

    In other words, the difference between correlation and cause is of no mathematical significance. It arises solely through the connections conceived of in a theoretical framework governing the data and is a semantic exercise, not a syntactical one.

    J

  2. The methods Pearl talks about have been used for some time now in the field of speech recognition. And making good progress, it seems to me.

  3. “philosophers writing about metaphysics really ought to understand relativity and quantum physics” – hear hear!

    A lot of social science research seems to be adopting Pearl’s methods too. That, and the replication “crisis”, are having very salutary effects (IMO). (If you’ve had a problem from birth, and you suddenly notice it, is that a “crisis”?)

  4. If Mr. Pear’s listening-seeing infer the human instinct to survive…
    …then this may be the language challenge to AI-calculation survival, he is concerned about…

  5. …If your older you see you leave letters off words sometimes “Pearl”, sorry ag…
    1.John, how about “HI” for human intelligence…causality unbounded…

  7. I too think Pearl is “right-ish”, and I certainly share his ambivalence about machine learning. It is curve fitting on steroids, and though it has the sheen of practical progress, it mustn’t be mistaken for an advance in general AI; it will hit a wall, just as expert systems and other fad computational advances have. But its utility is beyond dispute. It’s adept at finding patterns of meaning in our world and our language (basically our fingerprints on reality), but that shouldn’t be mistaken for understanding ourselves.

    Actual GAI will have to master not only modelling our world, but motivating our (robots’) actions and our quest for understanding. What motivates us to do things? How much of that is dependent on emotions, and to what extent can those just be stubbed out with computational placeholders? How do we use factual and counterfactual reasoning about possible worlds and simulations to make decisions? How do we converge on decisions?

    I’m not sure how much progress AI has made on any of that in the last thirty or forty years. In a way, I think AI was closer to the heart of the matter back in, say, the 70s. It derailed in the 80s with the ambitions of expert systems and continues to this day with machine learning.

  8. Clark Glymour is the philosopher who has worked on this kind of stuff since the 1980s – he has developed a few algorithms for assessment of causation in structural equation/causal graphical models. Causal inference in structural equations goes back to Sewall Wright’s 1918 paper.

    https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/viewFile/9686/9417
    Bochman & Lifschitz [2015] use McCain and Turner’s causal calculus (a non-monotonic logic) of to reexpress some of Pearl’s models:
    ‘the causal calculus is a two-layered construction. On the first level it has the underlying causal logic, a fully logical, though nonclassical, formalism of causal inference relations that has its own (possible worlds) logical semantics… Above this layer, however, the causal calculus includes a nonmonotonic “overhead” that is determined by the corresponding nonmonotonic semantics’.

    The figure above of smoking and lung cancer is R.A. Fisher’s alternative model for the correlation between the two – that there was a cancer-prone, anxious personality type which tended to take up smoking. It was pointed out in the 1950s-60s that such a confounder would have to be 10-50 fold more powerful than the apparent direct effect of smoking – improbably large. Just one example of how one assesses causation in a non-experimental setting.
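    The classic form of that argument, if I recall it correctly, is Cornfield’s inequality: for a confounder to fully explain an observed relative risk RR, it must be at least RR times as prevalent among the exposed as among the unexposed, however strongly it causes the disease. A back-of-envelope sketch (numbers invented):

    ```python
    # Cornfield-style bound: a binary confounder can only fully explain
    # an observed relative risk RR if it is at least RR times as
    # prevalent among smokers as among non-smokers.
    rr_observed = 9.0           # rough smoking/lung-cancer relative risk
    p_trait_nonsmokers = 0.05   # suppose 5% of non-smokers have the trait
    p_trait_smokers_min = rr_observed * p_trait_nonsmokers
    print(p_trait_smokers_min)  # 0.45 -- the anxious type would need to be
                                # nine times commoner among smokers
    ```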

  9. From Pearl:

    “We’re going to have robots with free will, absolutely. We have to understand how to program them.”

    Had to laugh.

    Maybe you can use ‘feedback loops’ as all the old AI humbugs used to say.

    JBD
