Chess problem computers can’t solve?

A somewhat enigmatic report in the Daily Telegraph says that this problem was devised by Roger Penrose, who claims that chess programs can’t solve it, while humans can manage a draw or even a win.

I’m not a chess buff, but it looks trivial. Although Black has an immensely powerful collection of pieces, they are all completely bottled up and immobile, apart from three bishops. Since these are all on white squares, the White king is completely safe from them so long as he stays on black squares. Since the white pawns fencing in Black’s pieces are all on black squares, the bishops can’t do anything about them either. It looks like a drawn position already, in fact.
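
For what it’s worth, the colour argument is the kind of thing you can check mechanically. Here’s a tiny Python sketch (my own illustration, nothing to do with Penrose’s exact position or any real engine) confirming that a bishop’s moves never change the colour of its square, which is all the argument really needs:

```python
# Minimal sketch: a bishop's diagonal moves always preserve square colour,
# so bishops confined to light squares can never give check to a king
# that keeps to dark squares.

def square_colour(file, rank):
    """0 = dark, 1 = light, with files a-h as 0-7 and ranks 1-8 as 0-7."""
    return (file + rank) % 2

def bishop_targets(file, rank):
    """All squares a bishop could reach in one diagonal move."""
    for df, dr in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        f, r = file + df, rank + dr
        while 0 <= f < 8 and 0 <= r < 8:
            yield f, r
            f, r = f + df, r + dr

# Every square a bishop can move to has the same colour as the one it stands on,
# so by induction it stays on that colour forever.
for file in range(8):
    for rank in range(8):
        colour = square_colour(file, rank)
        assert all(square_colour(f, r) == colour
                   for f, r in bishop_targets(file, rank))

print("Bishops never change square colour; a king on the other colour is safe from them.")
```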

I suppose Penrose believes that chess computers can’t deal with this because it’s a very weird situation which will not be in any of their reference material. If they resort to combinatorial analysis, the huge number of moves available to the bishops is supposed to render the problem intractable, while the computer cannot see the obvious consequences of the position the way a human can.
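
To see why naive brute force is supposed to choke, here’s a rough back-of-envelope sketch; the branching factor of 30 is just an assumed round number for illustration, not anything measured from the actual position:

```python
# Back-of-envelope illustration (assumed numbers, not the real position):
# if the free bishops and the kings give each side roughly 30 legal moves
# per ply, a brute-force search to depth d visits on the order of 30**d
# positions -- hopeless to exhaust, even though a human can dismiss the
# whole tree with one colour-of-squares observation.

branching_factor = 30          # assumed average legal moves per ply
for depth in (10, 20, 40):
    nodes = branching_factor ** depth
    print(f"depth {depth:2d}: ~{nodes:.1e} positions")
```

Of course, real engines prune and evaluate far more cleverly than this, so at best the point applies to exhaustive search.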

I don’t know whether it’s true that all chess programs are essentially that stupid, but the problem is evidently meant to buttress Penrose’s case that computers lack some quality of insight or understanding that is an essential property of human consciousness.

This is all apparently connected with the launch of a new Penrose Institute, whose website is here, though it currently appears to be incomplete. No doubt we’ll hear more soon.

Interesting Stuff

Picture: correspondent. Tom Clark is developing a representationalist approach to the hard problem and mental causation: see The appearance of reality and Respecting privacy: why consciousness isn’t even epiphenomenal. He borrows from Metzinger but diverges in some important respects, especially in denying that consciousness plays a causal role in third-person explanations of behavior. Tom says he’d welcome feedback.

Roger Penrose, delivering the second Rabindranath Tagore lecture in Kolkata, was surprisingly upbeat about prospects for AI, though he stuck to his view that consciousness is not computational and requires some exotic quantum physics. Alas, I can’t find a transcript.

At Google, Dmitriy Genzel is attempting machine translation of poetry. Considering that the translation of poetry is demanding, or even impossible, for skilful human translators, you could say this was ambitious. His paper (pdf) gives some examples of what has been achieved; there’s also a review in verse.

Finally, just a mention of the claim made briefly by Masao Ito that the cerebellum (normally regarded as the part of the brain that does the automatic stuff) may have an important role in high-level cognition. That would be very interesting, but don’t people sometimes have the cerebellum removed entirely? I understood that this makes life difficult for them in various ways but doesn’t seem to affect high-level processes?