Archive for January, 2011

Picture: Stephen Hawking. Having (sort of) criticised philosophers for their relatively undisciplined ways with terminology, it seems only fair to balance things up by noting a possible weakness of scientists, and I suggest impatience.  For scientists it sometimes seems that the final resolution of any great problem cannot be more than ten years away – twenty at the outside.  Turing’s suggestion that thinking machines would take about fifty years is an intolerably long-term forecast by these standards – why, we might be dead by then!  Unlike philosophers, scientists don’t seem content with shedding a small amount of light on a problem which was first seriously addressed by the civilisation before last, and will certainly take at least a few more centuries to clarify to any great extent.  Such sluggishness is the sign, in their eyes, that philosophy is dead.

That was the view taken by Stephen Hawking and his co-author Leonard Mlodinow in The Grand Design, at any rate. I have noticed as an empirical matter that when someone vigorously criticises philosophy, they are generally about to offer us some, and the rule does not fail in this instance:  besides offering us some choice new a priori metaphysics, Hawking yields again to his predilection for putting Kant straight on a couple of points.

In fact, although the book gives a general potted history of physics (not bad apart from the ghastly jokes), Hawking and Mlodinow are ultimately out to answer the fundamental questions of metaphysics: why is there anything? And why this?

The answer comes in two parts. There is something, because there’s nothing to prevent a universe arising so long as certain requirements are balanced. Positive energy has to be balanced with negative energy; fortunately gravity provides negative energy of the kind that, for whole universes at a time, will serve to balance all the positive energy.
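The zero-energy idea can be put schematically like this (my own back-of-the-envelope gloss, not the authors' derivation; the Newtonian estimate is the standard textbook one):

```latex
% The positive energy of matter is cancelled by the negative
% gravitational potential energy, so the total can be zero:
\[
E_{\text{total}} \;=\; \underbrace{E_{\text{matter}}}_{>\,0} \;+\; \underbrace{E_{\text{gravity}}}_{<\,0} \;=\; 0 .
\]
% For a uniform ball of mass M and radius R, the rough Newtonian estimates are
\[
E_{\text{matter}} \sim M c^{2}, \qquad E_{\text{gravity}} \sim -\,\frac{G M^{2}}{R},
\]
% and the two are comparable in magnitude when GM/(Rc^2) is of order one.
```

On estimates of this kind the observable universe comes out close to that critical balance, which is what lends the "universe from nothing" argument its initial plausibility.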

Why is it that there has to be this balance? Hawking and Mlodinow subscribe, it seems, to the old philosophical principle that ex nihilo nihil fit, or nothing will come of nothing, as King Lear put it. If things could appear out of nothing, then they might do so at any time or place, and the world would be incoherent: therefore, they can’t. This principle has been part of the essential bootstrapping of many metaphysical theories, although it’s difficult to show convincingly why the world can’t be incoherent (we just don’t like the idea) and particularly difficult to show that it couldn’t be just a little bit incoherent in certain cases and places and ways without having to descend into complete unintelligibility.

At any rate, the view offered here is that so long as the energies balance, the universe that springs into existence cancels out in theory and is therefore equivalent to nothing, so we’re in the clear. It’s a bit like pointing out that we can’t create money out of nothing, but so long as there’s a debt which matches the cash in our hands, the laws of the financial universe are satisfied. Handily it seems that the balancing of energies which allows whole universes to appear does not work within the universe, so that arbitrary entities cannot appear within the cosmos.

So far so good, if a bit skimpy; Hawking and Mlodinow don’t give much attention to the question of whether there might be other constraints on the existence of universes (so that the balancing of energies might be a necessary but not sufficient condition of their existence); they seem to assume that there aren’t. When we generate cash in exchange for a debt, we normally have a banker to satisfy, too; might not possible universes also face some additional hurdles before springing into existence? If not, don’t we face the prospect of an incoherent series of slightly different universes, and is that really any different from a single universe incoherent in itself (in one case events are indeterminable because they’re indeterminate in themselves; in the other they’re indeterminable because you can’t determine which universe you’re in)?

They don’t give much attention either to the question of different arrangements that might satisfy the balancing requirement. I got the impression that Hawking thinks something like our matter/energy entities and something very like gravity are the only real possibilities (rather in the way that you could incur debts in terms of cowrie shells or quatloos, but any medium of exchange is essentially money). There may be reasons for thinking this, but it would be good to know what they are.  Is it a meaningless question to ask whether the cosmic balancing could be carried out in terms of say, ‘left and right’ or ‘qwz and unqwz’ rather than positive and negative? Perhaps, but then could a universe pull off a dual or triple constitution by achieving a balance of positive and negative values along two or three axes?

One reason Hawking and Mlodinow don’t waste any time tidying up these loose ends is that they are relying on the second part of the argument, which explains why we’ve got this particular universe: the answer is the dreaded anthropic principle.  The anthropic principle says the universe was bound to be one that was suitable for us to live in; there are strong and weak versions. Hawking and Mlodinow say the weak version dictates only our environment while the strong one governs the laws of nature too.  I don’t think that’s quite right, although various statements of the difference have been offered. The weak version of the principle, as I understand it, is purely about appearances. It says that the world was bound to look to an observer like a place where observers could exist; but it’s nothing to get excited about, any more than we should get excited about the lucky fluke that we were born on a planet that supports human life. The strong version, much more controversial, says that our existence has an actual causal or constitutive effect on the universe, and this is what Hawking and Mlodinow seem to be going for.

Reviews of the book generally highlighted the fact that Hawking had broken up explicitly with God. In the past he indulged the old fellow good-humouredly, rather as he might have done with a superannuated colleague, seeing no reason to brusquely attack his possibly-unjustified reputation and even speaking gravely of knowing his mind. Now, suddenly, he has no time for God.

The reason is actually quite clear: in order to make the anthropic principle seem plausible, Hawking and Mlodinow spend some time emphasising how exquisitely the fundamental constants and constitution of the universe are set up to create just those knife-edge conditions which make humanity possible; but that nice adjustment can be read another way. It’s as though Hawking were making a speech about how no intelligent physicist can fail to be impressed by how exactly and non-randomly the universe has been designed for human beings; glancing at the audience he notices to his horror that the wrong people are nodding and hurriedly clarifies that the universe may be exquisitely designed, but for heaven’s sake, not by God!  Hawking and Mlodinow are a little sheepish about the nature of the anthropic principle: this may sound like philosophy, they admit: I’m afraid it’s rather worse than that – it sounds like theology.

They seek to defend the status of the principle as a scientific hypothesis, claiming that it leads to falsifiable predictions: for example, about the age of the universe. But that doesn’t seem to work; what we really do is deduce that the universe as we observe it could only be a certain age – the fact that we could only exist in a universe of this kind is another matter, established separately. From our mere existence we could not deduce the age of the universe at all, and the estimate of its age follows from observations to which our existence is actually irrelevant. But hey, that’s OK – not everything has to be science.

Actually the case that Hawking and Mlodinow make for the precision engineering of the universe is not totally convincing either. Among other things they put forward the remarkable precision of the cosmological constant: but the cosmological constant is an arbitrary number chosen to make the sums come out right, with no other justification: it’s there to fill a gap until a proper theory comes along. The only surprising thing about it is that physicists should be so impatient that they’d rather have an open lash-up like that than accept that, for the time being, they don’t understand the movement of galaxies. Impatience is similarly at work elsewhere; is it better to wonder at the inexplicable precision of theoretical constants, or hope that one day we might find an explanation for them? Would it have been better science if we’d rested on our laurels when the periodic table was established, contemplating in wonder how the elements had been arranged in such a neat way, sagely remarking that if they hadn’t been arranged numerically we probably wouldn’t be here, and that must surely be the reason for it?

One other problem with the strong form of the anthropic principle is that it requires that our present existence can reach back and influence the past of the universe.  There is normally a strong presumption that the present cannot affect the past: if it does, then that past will change the present itself, and we get either a vicious circle or some kind of uncontrolled spiral and the world becomes incoherent again, because any event is subject to arbitrary revision (See how useful it is if the universe is not allowed to be incoherent!). Now Hawking and Mlodinow invoke the two-slit experiment (apparently it has now been performed successfully not just with photons, but with buckyballs, actual large molecules). You probably know about this famously perplexing business; for present purposes the key point is that if we look at where the particles are, their path changes. It looks as if our intervention now has somehow changed the direction they set off in just before: or if they’re streaming in from a distant star, not just before, but long ages ago.  Now I’m not sure that it’s right to interpret the experiment as showing that we can change the past, but even if it is, there’s a significant difference when it comes to influencing the constitution of the universe, of which the observer is inevitably a part: at the least it seems that in that case there’s a particular problem of circularity.
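The point about observation changing the pattern can be illustrated with a toy calculation (a sketch only: the wavelength, slit separation and screen distance below are arbitrary illustrative values, not the parameters of any actual experiment):

```python
import numpy as np

# Toy two-slit model: compare the screen pattern when the two paths
# interfere with the pattern when a which-path measurement is made.
wavelength = 1.0   # arbitrary units
d = 5.0            # slit separation
L = 100.0          # distance from slits to screen
x = np.linspace(-30, 30, 200)  # positions along the screen

# Path lengths from each slit to each screen position
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)
k = 2 * np.pi / wavelength

# No which-path measurement: amplitudes add first, then we square,
# giving the familiar bright and dark fringes.
psi1 = np.exp(1j * k * r1)
psi2 = np.exp(1j * k * r2)
fringes = np.abs(psi1 + psi2) ** 2

# Which-path measurement made: probabilities add instead,
# and the fringes wash out into a flat distribution.
flat = np.abs(psi1) ** 2 + np.abs(psi2) ** 2

print(f"interfering:   min={fringes.min():.2f}, max={fringes.max():.2f}")
print(f"path observed: min={flat.min():.2f}, max={flat.max():.2f}")
```

The first line of output swings between near zero and near four (dark and bright fringes), while the second is a uniform two everywhere: looking at which slit the particle went through is, formally, just the switch from adding amplitudes to adding probabilities.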

There is, of course, another oddity about anthropicism: it seems to say that in the end the explanation for reality is to be found, not in the external world but in our own consciousness (phew – bet you thought I was never going to mention that). Now it’s not the creationists in the audience who are nodding, but the idealists, the panpsychists, and perhaps even the solipsists; all rigorous thinkers but surely not the friends Hawking and Mlodinow were expecting for their proclaimed philosophy of ‘model-dependent realism’?

Of course all this is intended to clear the way for M-theory. The kind of cosmic balancing Hawking and Mlodinow want from gravity requires supersymmetry and M-theory can provide it. Unfortunately M-theory, we’re told, is not a grand unification of the kind we used to hope for: it turns out those probably don’t work. But so what, say Hawking and Mlodinow: after all, you can’t have a map that shows the whole world (they seem to have an unusually jaundiced view of Mercator’s projection), so why should we expect a single theory that accounts for everything? Instead we can have a family of different approaches to apply in different areas, just as we have different maps for different areas of the world. If we can’t have a comprehensive final theory, let’s take what we can have and proclaim that instead.

Well, the other approach might be to restrain our impatience and wait a bit to see what new insights come up. Science still seems to have a few problems to clear up; Hawking and Mlodinow mention a few of these, and they also describe Ptolemaic astronomy. The thing about Ptolemaic astronomy is that it actually worked rather well; if anything, the maths worked better than it did for the Copernican system, at least to begin with. The problem was that Ptolemaic astronomy was full of strange entities it was difficult to believe in and arbitrary values inserted simply to make the sums come out right. A change of paradigm was needed, and perhaps one or two small ones are needed now.

In fairness, I can understand the impatience for answers. It isn’t necessarily a vice – and in a man like Hawking, who has spent so long with his apparent life-expectancy hovering only a little above zero, it is surely particularly understandable. But isn’t there something depressing about the idea that philosophy is dead and science all but finished? Isn’t there something more appealing in the idea that there’s plenty more science to come yet, or as Newton put it:

I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

I’m sorry the site was out of action for a couple of days recently – I’ll do my best to ensure it doesn’t happen again.

Peter

Picture: correspondent. Tom Clark is developing a representationalist approach to the hard problem and mental causation: see The appearance of reality and Respecting privacy: why consciousness isn’t even epiphenomenal. He borrows from Metzinger but diverges in some important respects, especially in denying the causal role of consciousness in 3rd person explanations of behavior. Tom says he’d welcome feedback.

Roger Penrose, delivering the second Rabindranath Tagore lecture in Kolkata, was surprisingly upbeat about prospects for AI, though sticking to his view that consciousness is not computational and requires some exotic quantum physics. Alas, I can’t find a transcript.

At Google, Dmitriy Genzel is attempting machine translation of poetry. Considering that the translation of poetry is demanding or even impossible for skilful human authors, you could say this was ambitious. His paper (pdf) gives some examples of what has been achieved: there’s also a review in verse.

Finally, just a mention for the claim made briefly by Masao Ito that the cerebellum (normally regarded as the part of the brain that does the automatic stuff) may have an important role in high-level cognition. That would be very interesting, but don’t people sometimes have the cerebellum entirely removed? I understood that this makes life difficult for them in various ways, but doesn’t seem to affect high-level processes.

Picture: pyramid of wisdom. Robots.net reports an interesting plea (pdf download) for clarity by Emanuel Diamant at the 3rd Israeli Conference on Robotics. Robotics, he says, has been derailed for the last fifty years by the lack of a clear definition of basic concepts: there are more than 130 definitions of data, and more than 75 definitions of intelligence.

I wouldn’t have thought serious robotics had been going for much more than fifty years (though of course there are automata and other precursors which go much further back), so that sounds pretty serious: but he’s clearly right that there is a bad problem, not just for robotics but for consciousness and cognitive science, and not just for data, information, knowledge, intelligence, understanding and so on, but for many other key concepts, notably including ‘consciousness’.

It could be that this has something to do with the clash of cultures in this highly interdisciplinary area.  Scientists are relatively well-disciplined about terminology, deferring to established norms, reaching consensus and even establishing taxonomical authorities. I don’t think this is because they are inherently self-effacing or obedient; I would guess instead that this culture arises from two factors: first, the presence of irrefutable empirical evidence establishes good habits of recognising unwelcome truth gracefully; second, a lot of modern scientific research tends to be a collaborative enterprise where a degree of consensus is essential to progress.

How very different things are in the lawless frontier territory of philosophy, where no conventions are universally accepted, and discrediting an opponent’s terminology is often easier and no less prestigious than tackling the arguments. Numerous popular tactics seem designed to throw the terminology into confusion. A philosopher may often, for instance, grab some existing words – ethics/morality, consciousness/awareness, information/data, or whatever – and use them to embody a particular distinction while blithely ignoring the fact that in another part of the forest another philosopher is using the same words for a completely different distinction. When irreconcilable differences come to light, a popular move is ‘giving’ the disputed word away: “Alright, then, you can just have ‘free will’ and make it what you like: I’m going to talk about ‘x-free will’ instead in future. I’ll define ‘x-free will’ to my own satisfaction and when I’ve expounded my theory on that basis I’ll put in a little paragraph pointing out that ‘x-free will’ is the only kind worth worrying about, or the only kind everyone in the real world is actually talking about”. These and other tactics lead to a position where in some areas it’s generally necessary to learn a new set of terms for every paper: to have others picking up your definitions and using them in their papers, as happens with Ned Block’s p- and a-consciousness, for example, is a rare and high honour.

It’s not that philosophers are quarrelsome and egotistical (though of course they are);  it’s more that the subject matter rarely provides any scope for pinning down an irrefutable position, and is best tackled by single brains operating alone (Churchlands notwithstanding).

Diamant is particularly exercised by problems over ‘data’, ‘information’, ‘knowledge’, and ‘intelligence’. Why can’t we sort these out? He correctly identifies a key problem: some of these terms properly involve semantics, and the others don’t (needless to say, it isn’t clearly agreed which words fall into which camp). What he perhaps doesn’t realise clearly enough is that the essential nature of semantics is an extremely difficult problem which has so far proved unamenable to science. We can recognise semantics quite readily, and we know well enough the sort of thing semantics does; but exactly how it does those things remains a cloudy matter, stuck in the philosophical badlands.

If my analysis is right, the only real hope of clarification would be if we could come up with some empirical research (perhaps neurological, perhaps not) which would allow us to define semantics (or x-semantics at any rate), in concrete terms that could somehow be demonstrated in a lab. That isn’t going to happen any time soon, or possibly ever.

Diamant wants to press on however, and inevitably by doing so in the absence of science he falls into philosophy: he offers us implicitly a theory of his own and – guess what? Another new way of using the terminology. The theory he puts forward is that semantics is a matter of convention between entities. Conventions are certainly important: the meaning of particular words or symbols is generally a matter of convention; but that doesn’t seem to capture the essence of the thing. If semantics were simply a matter of convention, then before God created Adam he could have had no semantics, and could not have gone around asking for light; on the other hand, if we wanted a robot to deal with semantics, all we’d need to do would be to agree a convention with it or perhaps let it in on the prevailing conventions. I don’t know how you’d do that with a robot which had no semantics to begin with, as it wouldn’t be able to understand what you were talking about.

There are, of course, many established philosophical attempts to clarify the intentional basis of semantics. In my personal view the best starting point is H.P. Grice’s theory of natural meaning (those black clouds mean rain); although I think it’s advantageous to use a slightly different terminology…