Chess

22 May 2005


[Image: Lewis chess piece]

Playing chess must be about the most celebrated achievement of computers in the fifty-odd years since Alan Turing launched the modern quest for artificial intelligence. In a way this is odd: chess is only a game, after all - what does it matter? One reason must be the high social status of chess and its association with esoteric intellectual prowess; no-one would have been impressed to the same extent by a successful poker-playing robot, in spite of the formidable challenges that task raises.

But a more important reason is probably the way chess had previously been used as the paradigmatic example of something computers couldn't do. If we go back as far as the 1960s, a standard account would explain that computers were very quick at maths, but incapable of certain other tasks. The only way they could play chess, for example, was by exhaustively considering all possible moves and their consequences: an astronomical task which no machine, even in theory, could possibly deal with. It was usual to illustrate the way the numbers involved grew rapidly out of control by telling the old legend about the inventor of chess. According to the story, he presented the game to the king, who was so delighted he offered the inventor any reward he chose. The inventor, with apparent modesty, asked for a quantity of wheat: one grain on the first square of the chessboard, two on the next, and so on, doubling each time. Of course, it turns out that the quantity of wheat involved by the time we reach the sixty-fourth square is in fact colossal, far beyond the capacity of any mere kingdom to supply: it has been variously described as several hundred times the world's current annual production; a quantity which would require a barn about 25 miles square and 1000 feet high; or a quantity which, counted out grain by grain, would take about 584 billion years to deliver. Such is the power of geometric progression. The inventor got his head cut off instead.
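
The arithmetic is easy to check. Here is a minimal sketch in Python; the last figure assumes wheat counted out at one grain per second, which appears to be the rate behind the 584-billion-year estimate quoted above.

    # Total wheat for the chessboard legend: one grain on the first square,
    # doubling on each of the 64 squares.
    total_grains = sum(2 ** i for i in range(64))      # = 2**64 - 1
    print(f"{total_grains:,} grains")                  # 18,446,744,073,709,551,615

    # Counted out at one grain per second, how long would delivery take?
    seconds_per_year = 60 * 60 * 24 * 365.25
    years = total_grains / seconds_per_year
    print(f"about {years / 1e9:.1f} billion years")    # about 584.5 billion years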


Those 60s writers, however, underestimated the scope for progress in two important respects. First, computers don't in fact have to exhaust all possible moves: more sophisticated search algorithms can prune away whole lines of play that cannot affect the outcome. Second, they didn't take account of Moore's Law. First stated in 1965, this was originally about the number of transistors you could get into an integrated circuit, but it is now generally interpreted as the wider view that computing power itself increases geometrically, doubling periodically. It was as though the king in the story had come up with a magic grain of wheat which produced two identical offspring, each of which went on to do the same, and so on.
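
The sort of sophistication involved is typified by alpha-beta pruning, which lets a minimax search skip whole branches once it is clear they cannot change the result. The sketch below is purely illustrative - it works on a hand-made tree of leaf scores rather than on real chess positions, and is nothing like a full chess engine.

    # Alpha-beta pruning over a toy game tree. Leaves are scores from the
    # point of view of the player to move at the root; internal nodes are
    # lists of alternatives for the two players, who move alternately.
    def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        if not isinstance(node, list):                 # a leaf: return its score
            return node
        if maximizing:
            best = float("-inf")
            for child in node:
                best = max(best, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, best)
                if alpha >= beta:                      # cut-off: this line is already refuted
                    break
            return best
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

    # A small three-ply example: the minimax value is 5, and the pruning
    # reaches it without ever examining some of the leaves.
    tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
    print(alphabeta(tree, True))                       # 5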

For some time it was an interesting question whether chess would eventually be mastered by clever new programming which let the computer play in a more human style, or by the simple advance of brute-force number-crunching power. In practice, it became increasingly clear that some combination of the two was inevitably going to work one day. The sceptical view died hard, though; Hubert Dreyfus notoriously disparaged the chess-playing potential of computers in his 1972 book 'What Computers Can't Do', and has been the object of gleeful mockery more or less ever since. He denies ever saying that computers would never be able to play chess: rather, he remarked that at the time of writing they still couldn't play even at a good amateur level. Dreyfus has a sophisticated theory about the limitations of AI, along Heideggerian lines: essentially he denies the possibility of formalising general-purpose human cognition, which in his view really consists of a complex mixture of habits, skills, and other non-theoretical capacities. As he also points out, this stance does not naturally imply that computers would perform badly in a formalised micro-world like chess. As the century wore on, chess-playing machines and programs filled the shops, and the best programs began to creep up even on chess masters.

The argument was effectively settled, of course, by the famous victory of 'Deep Blue' over Kasparov in 1997. Actually, there are still some arguments which sceptics can deploy. Deep Blue was carefully adjusted by human experts on an ongoing basis during the match to suit Kasparov's play and the strategic situation: this parallels the way human champions, including Kasparov, can take advice from a support team during intervals, but perhaps a more thorough test of the silicon side would have allowed Deep Blue to take advice only from other computers. The presence of human 'advisors' in the process opens the way to an accusation that the match was more a matter of 'play by wire' than genuinely autonomous computation. In the final analysis, though, does it matter? Do computers have to be able to beat the world champion before we accept that they can play chess?


Should we, in any case, be impressed by the chess prowess of contemporary computers? I think we should, for two reasons: one fairly evident, the other slightly obscure but rather more alarming.

The victory of Deep Blue was presented as very much a matter of brute-force computing power - we were encouraged to think that Kasparov had been beaten by a machine, not by a program. This is obviously an over-simplification, but the conquest of chess does represent a victory of sorts for mere processing power, and this has implications for other fields. Take computer translation, for example, which has so far achieved only the most modest levels of success. The underlying problem is that some parts of translation depend on understanding the meaning of the text, which computers can't do. But the task is also surely amenable to brute force in theory. If the program has enough information about alternative translations, and a long enough list of contextual clues to pick up on, the level of error will eventually fall below that of human translators (who, after all, sometimes misunderstand the text themselves). In principle, the 'list' required is vast, possibly even infinite, but if chess is any guide, that need not be an obstacle. Chess has still not been exhausted, and probably never will be: it turned out to be enough to deal with a salient subset of cases. It might well turn out that a similar subset of things people are likely to have written would be enough for practically efficient translation. Once translation is out of the way, a seriously effective performance on the Turing Test begins to look possible at last. Unambiguously passing the Turing Test isn't really a sign of consciousness (certainly not for a machine which we knew was working on brute-force principles) but it would certainly open up a distinct new phase of the argument.
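
To make the idea of a 'long enough list of contextual clues' concrete, here is a deliberately tiny toy in Python: candidate translations are tagged with clue words, and whichever candidate best matches the surrounding context is chosen. The little French-English table is invented purely for illustration; nothing here resembles a real translation system, but it shows the brute-force shape of the approach.

    # Each source word has candidate translations tagged with context clues;
    # the candidate whose clues best overlap the surrounding words wins.
    PHRASE_TABLE = {
        "avocat": [
            {"translation": "lawyer",  "clues": {"court", "judge", "client"}},
            {"translation": "avocado", "clues": {"salad", "ripe", "toast"}},
        ],
    }

    def translate_word(word, context_words):
        candidates = PHRASE_TABLE.get(word)
        if not candidates:
            return word                                # unknown words pass through untouched
        return max(candidates,
                   key=lambda c: len(c["clues"] & context_words))["translation"]

    context = {"the", "judge", "asked", "a", "question", "in", "court"}
    print(translate_word("avocat", context))           # lawyer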


The second point springs from the discovery, implicit in the success of brutal computing, that there are distinctively different ways of 'thinking about' chess from the way the human brain does it; it turns out that in some cases these may reveal things the unaided human brain would never have found. The most remarkable achievements in this respect are perhaps brute-force analyses of endgames rather than general chess-playing ability. By exhaustively analysing the possibilities, computers have shown that many positions which were previously considered to be inevitable draws can in fact be won, in some cases by extraordinarily long drawn-out sequences of moves. In the case where a king and two bishops face a king and a knight, it had been chess orthodoxy since the mid-nineteenth century that if the 'knight' side could reach a certain position, it could draw. This proves to be entirely wrong: in fact, the 'bishops' side has a win in almost all cases. Some of the endgame 'strategies' found by computers are capable of being learnt by human beings, but this particular case is an example of a strategy so meandering and bizarre that, in spite of considerable efforts, it seems to be beyond the power of even a specialist to understand or learn it. Presumably the number of different tactical contingencies which need to be held in mind at the same time simply exceeds the capacity of the human mind, sometimes said to be limited to about seven items.
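
The technique behind these endgame results is retrograde analysis: every position is classified, working outwards from the terminal positions, as won, drawn or lost for the side to move, together with its distance from the end under best play. The sketch below applies the same idea to a trivially small toy game (take one or two counters from a pile; whoever cannot move loses) rather than to chess, purely to show the shape of the computation.

    # Retrograde-style solution of a toy game: classify every pile size as a
    # win or a loss for the side to move, plus the length of best play,
    # starting from the terminal position (an empty pile) and working outwards.
    def solve(max_pile):
        result = {0: ("LOSS", 0)}                      # no move available: an immediate loss
        for n in range(1, max_pile + 1):
            successors = [result[n - k] for k in (1, 2) if n - k >= 0]
            losing_replies = [d for outcome, d in successors if outcome == "LOSS"]
            if losing_replies:                         # some move leaves the opponent lost
                result[n] = ("WIN", min(losing_replies) + 1)
            else:                                      # every move leaves the opponent winning
                result[n] = ("LOSS", max(d for _, d in successors) + 1)
        return result

    table = solve(10)
    print(table[9])                                    # ('LOSS', 6): multiples of 3 are lost
    print(table[7])                                    # ('WIN', 5)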


Personally, I find this rather scary. This kind of discovery can perhaps be related to the computer-inspired finding that chaotic results can emerge from the repeated application of relatively simple equations, quite contrary to human intuition, and more loosely to the emergence of mathematical proofs by exhaustive computer examination of all possible cases: proofs which we have to accept but cannot really ever understand.
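
The standard example of the first of these findings is the logistic map, x -> r*x*(1 - x). In the chaotic regime, two starting values that differ by one part in a billion soon produce trajectories that bear no resemblance to each other, as this minimal sketch shows.

    # Iterate the logistic map from two almost identical starting points and
    # watch the trajectories diverge (r = 4 puts the map in its chaotic regime).
    def logistic_trajectory(x, r=4.0, steps=60):
        values = []
        for _ in range(steps):
            x = r * x * (1 - x)
            values.append(x)
        return values

    a = logistic_trajectory(0.300000000)
    b = logistic_trajectory(0.300000001)
    for step in (10, 30, 50):
        print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")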

In short, it suggests that human cognition is actually rather patchy: the gaps in our comprehension of the world in general may be quite large, but we don't notice them precisely because they consist of the kind of thing we have trouble recognising. Of course, if Colin McGinn is right, consciousness itself falls into one of these blind spots.

