Phi

Picture: Phi. I was wondering recently what we could do with all the new computing power which is becoming available.  One answer might be calculating phi, effectively a measure of consciousness, which was very kindly drawn to my attention by Christof Koch. Phi is actually a time- and state-dependent measure of integrated information developed by Giulio Tononi in support of the Integrated Information Theory (IIT) of consciousness which he and Koch have championed.  Some readable expositions of the theory are here and here with the manifesto here and a formal paper presenting phi here. Koch says the theory is the most exciting conceptual development he’s seen in “the inchoate science of consciousness”, and I can certainly see why.

The basic premise of the theory is simply that consciousness is constituted by integrated information. It stems from the phenomenological observations that there are vast numbers of possible conscious states, and that each of them appears to unify or integrate a very large number of items of information. What really lifts the theory above the level of most others in this area is the detailed mathematical underpinning, which means phi is not a vague concept but a clear and possibly even a practically useful indicator.
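To give a flavour of how integration can be put into numbers, here is a toy sketch in Python. It is emphatically not Tononi's actual formalism, which minimises effective information over all partitions of a system and is far more involved; it simply computes the mutual information between two halves of a tiny binary system, which is zero when the halves are independent and maximal when they are fully integrated.

```python
from itertools import product
from math import log2

# Toy illustration only: real phi involves minimising effective information
# over all partitions of a system. Here we just measure the mutual
# information between two halves of a four-state joint distribution,
# a crude proxy for 'what the whole says beyond its parts'.

def mutual_information(joint):
    """joint: dict mapping a state pair (a, b) to its probability."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * log2(p / (pa[a] * pb[b]))
    return mi

# A fully integrated pair: the two halves are perfectly correlated.
integrated = {(0, 0): 0.5, (1, 1): 0.5}
# A disconnected pair: the two halves are statistically independent.
independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

print(mutual_information(integrated))    # 1.0 bit: each half determines the other
print(mutual_information(independent))   # 0.0 bits: no integration at all
```

The point of the toy is only that 'integration' here is a measurable quantity, not a metaphor, which is what distinguishes phi from vaguer proposals.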

One implication of the theory is that consciousness lies on a continuum: rather than being an on-or-off matter, it comes in degrees. The idea that lower levels of consciousness may occur when we are half-awake, or in dogs or other animals, is plausible and appealing. Perhaps a little less intuitive is the implication that there must in theory be higher states of consciousness than any existing human being could ever have attained. I don’t think this means states of greater intelligence or enlightenment, necessarily; it’s more a matter of being more awake than awake, an idea which (naturally enough, I suppose) is difficult to get one’s head around, but has a tantalising appeal.

Equally, the theory implies that some minimal level of consciousness goes a long way down to systems with only a small quantity of integrated information. As Koch points out, this looks like a variety of panpsychism or panexperientialism, though I think the most natural interpretation is that real consciousness probably does not extend all that far beyond observably animate entities.

One congenial aspect of the theory for me is that it puts causal relations at the centre of things: while a system with complex causal interactions may generate a high value of phi, a ‘replay’ of its surface dynamics would not. This seems to capture in a clearer form the hand-waving intuitive point I was making recently in discussion of Mark Muhlestein’s ideas. Unfortunately calculation of phi for the human brain remains beyond reach at the moment due to the unmanageable levels of complexity involved; this is disappointing, but in a way it’s only what you would expect. Nevertheless, there is, unusually in this field, some hope of empirical corroboration.

I think I’m convinced that phi measures something interesting and highly relevant to consciousness; perhaps it remains to be finally established that what it measures is consciousness itself, rather than some closely associated phenomenon, some necessary but not sufficient condition. Your view about this, pending further evidence, may be determined by how far you think phenomenal experience can be identified with information. Is consciousness in the end what information – integrated information – just feels like from the inside? Could this be the final answer to the insoluble question of qualia? The idea doesn’t strike me with the ‘aha!’ feeling of the blinding insight, but (and this is pretty good going in this field) it doesn’t seem obviously wrong either.  It seems the right kind of answer, the kind that could be correct.

Could it?

Buy AI

Picture: chess with a machine. Kenneth Rogoff is putting his money on AI to be the new source of economic growth, and he seems to think the Turing Test is pretty much there for the taking.

His case is mainly based on an analogy with chess, where he observes that since the landmark victory of “Deep Blue” over Kasparov, things have continued to move on, so that computers now move in a sphere far above their human creators, making moves whose deep strategy is impenetrable to merely human brains. They can even imitate the typical play of particular Grandmasters in a way which reminds Rogoff of the Turing Test. If computers can play chess in a way indistinguishable from that of a human being, it seems they have already passed the ‘Chess Turing Test’. In fact he says that nowadays it takes a computer to spot another computer.

I wonder if that’s literally the case: I don’t know much about chess computing, but I’d be slightly surprised to hear that computer-detecting algorithms as such had been created. I think it’s more likely that where a chess player is accused of using illicit computer advice, his accusers point to a chess program which advises exactly the moves he made in the particular circumstances of the game. Aha, they presumably say, those moves of yours which turned out so well make no sense to us human beings, but look at what the well-known top-notch program Deep Gambule 5000 recommends…

There’s a kind of melancholy pleasure for old gits like me in the inversion which has occurred over chess; when we were young, chess used to be singled out as a prime example of what computers couldn’t do, and the reason was usually given as being the combinatorial explosion which arises when you try to trace out every possible future move in a game of chess. For a while people thought that more subtle programming would get round this, but the truth is that in the end the problem was mainly solved by sheer brute force; chess may be huge, but the computing power of contemporary computers has become even huger.
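The scale of that explosion is easy to sketch. Assuming the commonly quoted average of about 35 legal moves per chess position (a rough figure, used here purely for illustration), the number of lines a brute-force search must consider grows exponentially with depth:

```python
# Back-of-the-envelope combinatorial explosion in chess search.
# The branching factor of 35 is a commonly quoted average, assumed
# here for illustration rather than derived from anything precise.

BRANCHING = 35

for depth in range(1, 9):
    print(f"{depth} plies ahead: about {BRANCHING ** depth:,} lines")
```

By eight plies the count is already in the trillions, which is why 'just look at every move' was long thought hopeless, and why it took raw hardware speed (plus pruning tricks) rather than human-style insight to make it work.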

On the one hand, that suggests that Rogoff is wrong. We didn’t solve the chess problem by endowing computers with human-style chess reasoning; we did it by throwing ever bigger chunks of data around at ever greater speeds.  A computer playing grandmaster chess may be an awesome spectacle, but not even the most ardent computationalist thinks there’s someone in there. The Turing Test, on the other hand, is meant to test whether computers could think in broadly the human way; the task of holding a conversation is supposed to be something that couldn’t be done without human-style thought. So if it turns out we can crack the test by brute force (and mustn’t that be theoretically possible at some level?) it doesn’t mean we’ve achieved what passing the test was supposed to mean.

In another way, though, the success with chess suggests that Rogoff is right. Some of the major obstacles to human-style thought in computers belong to the family of issues related to the frame problem, in its broadest versions, and the handling of real-world relevance. These could plausibly be described as problems with combinatorial explosion, just like the original chess issue but on a grander scale. Perhaps, as with chess, it will finally turn out to be just a matter of capacity?

All of this is really a bit beside Rogoff’s main interest; he is primarily interested in new technology of a kind which might lead to an economic breakthrough; although he talks about Turing, the probable developments he has in mind don’t actually require us to solve the riddle of consciousness. His examples, from managing the electronics and lighting in our homes to populating “smart grids” for water and electricity, “helping monitor these and other systems to reduce waste”, actually seem like fairly mild developments of existing techniques, hardly the sort of thing that requires deep AI innovation at all. The funny thing is, I’m not sure we really have all that many really big, really new ideas for what we might do with the awesome new computing power we are steadily acquiring. This must certainly be true of chess – where do we go from here, keep building even better programs to play games against each other, games of a depth and subtlety which we will never be able to appreciate?

There’s always the Blue Brain project, of course, and perhaps CYC and similar mega-projects; they can still absorb more capacity than we can yet provide. Perhaps in the end consciousness is the only worthy target for all that computing power after all.