Go AI

The recent victory scored by the AlphaGo computer system over a professional Go player might be more important than it seems.

At first sight it seems like another milestone on a pretty well-mapped road; significant but not unexpected. We’ve been watching games gradually yield to computers for many years; chess, notoriously, was one they once said was permanently out of the reach of the machines. All right, Go is a little bit special. It’s an extremely elegant game; from some of the simplest equipment and rules imaginable it produces a strategic challenge of mind-bending complexity, and one whose combinatorial vastness seems to laugh scornfully at Moore’s Law – maybe you should come back when you’ve got quantum computing, dude! But we always knew that that kind of confidence rested on shaky foundations; maybe Go is in some sense the final challenge, but sensible people were always betting on its being cracked one day.

The thing is, Go has not been beaten in quite the same way as chess. At one time it seemed to be an interesting question as to whether chess would be beaten by intelligence – a really good algorithm that sort of embodied some real understanding of chess – or by brute force; computers that were so fast and so powerful they could analyse chess positions exhaustively. That was a bit of an oversimplification, but I think it’s fair to say that in the end brute force was the major factor. Computers can play chess well, but they do it by exploiting their own strengths, not through human-style understanding. In a way that is disappointing, because it means the successful systems don’t really tell us anything new.

Go, by contrast, has apparently been cracked by deep learning, the technique that seems to be entering a kind of high summer of success. Oversimplifying again, we could say that the history of AI has seen a contest between two tribes: those who simply want to write programs that do what’s needed, and those who want the computer to work it out for itself, maybe using networks and reinforcement methods that broadly resemble the things the human brain seems to do. Neither side, frankly, has altogether delivered on its promises, and what we might loosely call the machine learning people have faced accusations that even when their systems work, we don’t know how, and so can’t consider them reliable.

What seems to have happened recently is that we have got better at deploying several different approaches effectively in concert. In the past people have sometimes tried to play golf with only one club, essentially using a single kind of algorithm which was good at one kind of task. The new Go system, by contrast, uses five different components carefully chosen for the tasks they were to perform; and instead of having good habits derived from the practice and insights of human Go masters built in, it learns for itself, playing through thousands of games.
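
To make the ‘learns for itself’ idea concrete, here is a minimal self-play sketch in Python. Everything in it is an illustrative stand-in: the game is Nim rather than Go, and the ‘policy’ is a simple preference table rather than the networks and search the real system combines; the only signal it gets is who won.

```python
import random
from collections import defaultdict

# Toy self-play reinforcement sketch (illustrative only). The game is Nim:
# players alternately take 1-3 stones from a pile, and whoever takes the
# last stone wins. The "policy" is a table of preferences over (pile, move)
# pairs; it starts knowing nothing and is nudged towards moves that occurred
# in games the mover went on to win.

PILE, MOVES = 15, (1, 2, 3)

def choose(policy, pile):
    moves = [m for m in MOVES if m <= pile]
    weights = [max(policy[(pile, m)], 0.05) for m in moves]  # keep some exploration
    return random.choices(moves, weights=weights)[0]

def self_play(policy):
    """Play one game against itself; return each player's (pile, move) history and the winner."""
    pile, player, history = PILE, 0, {0: [], 1: []}
    while True:
        move = choose(policy, pile)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            return history, player          # taking the last stone wins
        player = 1 - player

def train(games=20000, step=0.1):
    policy = defaultdict(lambda: 1.0)
    for _ in range(games):
        history, winner = self_play(policy)
        for player, steps in history.items():
            reward = 1.0 if player == winner else -1.0
            for state, move in steps:
                policy[(state, move)] += step * reward  # reinforce choices from won games
    return policy

if __name__ == "__main__":
    policy = train()
    # With enough games the table should come to favour leaving the opponent
    # a multiple of four stones -- the winning strategy -- without being told it.
    for pile in (5, 6, 7):
        best = max((m for m in MOVES if m <= pile), key=lambda m: policy[(pile, m)])
        print(f"pile of {pile}: prefers taking {best}")
```

Crude as it is, the sketch shows the shape of the approach: no human habits are built in, and whatever skill emerges comes entirely from the games it plays against itself.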

This approach takes things up to a new level of sophistication and clearly it is yielding remarkable success; but it’s also doing it in a way which I think is vastly more interesting and promising than anything done by Deep Thought or Watson. Let’s not exaggerate here, but this kind of machine learning looks just a bit more like actual thought. Claims are being made that it could one day yield consciousness; usually, if we’re honest, claims like that on behalf of some new system or approach can be dismissed because on examination the approach is just palpably not the kind of thing that could ever deliver human-style cognition; I don’t say deep learning is the answer, but for once, I don’t think it can be dismissed.

Demis Hassabis, who led the successful Google DeepMind project, is happy to take an optimistic view; in fact he suggests that the best way to solve the deep problems of physics and life may be to build a deep-thinking machine clever enough to solve them for us (where have I heard that idea before?). The snag with that is that old objection; the computer may be able to solve the problems, but we won’t know how and may not be able to validate its findings. In the modern world science is ultimately validated in the agora; rival ideas argue it out and the ones with the best evidence win the day. There are already some emerging problems of this kind, with proofs (the four colour theorem is the classic example) achieved by an exhaustive computational consideration of cases that no human brain can ever properly validate.

More nightmarish still, the computer might go on to understand things we’re not capable of understanding. Or seem to: how could we be sure?

Preparing the triumph of brute force?

The Guardian had a piece recently which was partly a profile of Ray Kurzweil, and partly about the way Google seems to have gone on a buying spree, snapping up experts on machine learning and robotics – with Kurzweil himself made Director of Engineering.

The problem with Ray Kurzweil is that he is two people. There is Ray Kurzweil the competent and genuinely gifted innovator, a man we hear little from: and then there’s Ray Kurzweil the motor-mouth, prophet of the Singularity, aspirant immortal, and gushing fountain of optimistic predictions. The Guardian piece praises his record of prediction, rather oddly quoting in support his prediction that by the year 2000 paraplegics would be walking with robotic leg prostheses – something that in 2014 has still not happened. That perhaps does provide a clue to the Kurzweil method: if you issue thousands of moderately plausible predictions, some will pay off. A doubtless-apocryphal story has it that at AI conferences people play the Game of Kurzweil. Players take turns to offer a Kurzweilian prediction (by 2020 there will be a restaurant where sensors sniff your breath and the ideal meal is got ready without you needing to order; by 2050 doctors will routinely use special machines to selectively disable traumatic memories in victims of post-traumatic stress disorder; by 2039 everyone will have an Interlocutor, a software agent that answers the phone for us, manages our investments, and arranges dates for us… we could do this all day, and Kurzweil probably does). The winner is the first person to sneak in a prediction of something that has in fact happened already.

But beneath the froth is a sharp and original mind which it would be all too easy to underestimate. Why did Google want him? The Guardian frames the shopping spree as being about bringing together the best experts and the colossal data resources to which Google has access. A plausible guess would be that Google wants to improve its core product dramatically. At the moment Google answers questions by trying to provide a page from the web where some human being has already given the answer; perhaps the new goal is technology that understands the question so well that it can put together its own answer, gathering and shaping selected resources in very much the way a human researcher working on a bespoke project might do.

But perhaps it goes a little further: perhaps they hope to produce something that will interact with humans in a human-like way.  A piece of software like that might well be taken to have passed the Turing test, which in turn might be taken to show that it was, to all intents and purposes, a conscious entity. Of course, if it wasn’t conscious, that might be a disastrous outcome; the nightmare scenario feared by some in which our mistake causes us to nonsensically award the software human rights, and/or  to feel happier about denying them to human beings.

It’s not very likely that the hypothetical software (and we must remember that this is the merest speculation) would have even the most minimal forms of consciousness. We might take the analogy of Google Translate; a hugely successful piece of kit, but one that produces its translations with no actual understanding of the texts or even the languages involved. Although highly sophisticated, it is in essence a ‘brute force’ solution; what makes it work is the massive power behind it and the massive corpus of texts it has access to.  It seems quite possible that with enough resources we might now be able to produce a credible brute force winner of the Turing Test: no attempt to fathom the meanings or to introduce counterparts of human thought, just a massive repertoire of canned responses, so vast that it gives the impression of fully human-style interaction. Could it be that Google is assembling a team to carry out such a project?
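
To caricature that kind of corpus-driven approach, here is a tiny Python sketch that builds a word-for-word table from three invented aligned sentence pairs and ‘translates’ by lookup alone, with no grammar and no understanding. It illustrates the brute-force principle only; it is not, of course, how Google Translate is actually engineered, and the sentence pairs and scoring rule are just assumptions made for the example.

```python
from collections import Counter

# A deliberately crude "translation by lookup in parallel texts" sketch.
# Each source word is mapped to the target word it co-occurs with most
# distinctively (a simple Dice score over a handful of aligned pairs).

parallel = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

def build_table(pairs):
    src_count, tgt_count, co_count = Counter(), Counter(), Counter()
    for src, tgt in pairs:
        s_words, t_words = set(src.split()), set(tgt.split())
        src_count.update(s_words)
        tgt_count.update(t_words)
        co_count.update((s, t) for s in s_words for t in t_words)
    table = {}
    for s in src_count:
        # Dice score rewards pairs that appear together and rarely apart.
        table[s] = max(tgt_count,
                       key=lambda t: 2 * co_count[(s, t)] / (src_count[s] + tgt_count[t]))
    return table

table = build_table(parallel)

def translate(sentence):
    # No grammar rules and no understanding: just word-by-word lookup.
    return " ".join(table.get(w, w) for w in sentence.split())

print(translate("the dog eats"))   # -> "le chien mange"
```

Scale the corpus up from three sentences to a large slice of the web and the lookups start to look uncannily competent, while remaining lookups all the same.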

Well, it could be. However, it could also be that cracking true thought is actually on the menu. Vaughan Bell suggests that the folks recruited by Google are honest machine learning types with no ambitions in the direction of strong AI. Yet, he points out, there are also names associated with the trendy topic of deep learning. The neural networks (but y’know, deeper) which deep learning uses just might be candidates for modelling human neuron-style cognition. Unfortunately it seems quite possible that if consciousness were created by deep learning methods, nobody would be completely sure how it worked or whether it was real consciousness or not. That would be a lamentable outcome: it’s bad enough to have robots that naive users think are people; having robots and genuinely not knowing whether they’re people or not would be deeply problematic.

Probably nothing like that will happen: maybe nothing will happen. The Guardian piece suggests Kurzweil is a bit of an outsider: I don’t know about that.  Making extravagantly optimistic predictions while only actually delivering much more modest incremental gains? He sounds like the personification of the AI business over the years.

Google consciousness

Bitbucket I was interested to see this Wired piece recently; specifically the points about how Google picks up contextual clues. I’ve heard before about how Google’s translation facilities basically use the huge database of the web: instead of applying grammatical rules or anything like that, they just find equivalents in parallel texts, or alternatives that people use when searching, and this allows them to do a surprisingly good – not perfect – job of picking up those contextual issues that are the bane of most translation software. At least, that’s my understanding of how it works. Somehow it hadn’t quite occurred to me before, but a similar approach lends itself to the construction of a pretty good kind of chatbot – one that could finally pass the Turing Test unambiguously.

Blandula Ah, the oft-promised passing of the Turing Test. Wake me up when it happens – we’ve been round this course so many times in the past.

Bitbucket Strangely enough, this does remind me of one of the things we used to argue about a lot in the past.  You’ve always wanted to argue that computers couldn’t match human performance in certain respects in principle. As a last resort, I tried to get you to admit that in principle we could get a computer to hold a conversation with human-level responses just by the brutest of brute force solutions.  You just can a perfect response for every possible sentence. When you get that sentence as input, you send the canned response as output. The longest sentence ever spoken is not infinitely long, and the number of sentences of any finite length is finite; so in principle we can do it.

Blandula I remember: what you could never grasp was that the meaning of a sentence depends on the context, so you can’t devise a perfect response for every sentence without knowing what conversation it was part of. What would the canned response be to ‘What do you mean?’ – to take just one simple example.

Bitbucket What you could never grasp was that in principle we can build in the context, too. Instead of just taking one sentence, we can have a canned response to sets of the last ten sentences if we like – or the last hundred sentences, or whatever it takes. Of course the resources required get absurd, but we’re talking about the principle, so we can assume whatever resources we want.  The point I wanted to make is that by using the contents of the Internet and search enquiries, Google could implement a real-world brute-force solution of broadly this kind.
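
To put that in concrete form (purely for illustration – the entries and the window size below are invented), here is a toy lookup chatbot in Python whose key is not the last sentence alone but the tuple of the last few utterances, so the same question draws different canned answers in different conversations.

```python
# Toy canned-response chatbot with context: the lookup key is the tuple of
# the last CONTEXT utterances, not the final sentence alone. The entries are
# invented; the point is the mechanism, not the coverage.

CONTEXT = 2  # how many previous utterances form the key

canned = {
    ("I saw the new Go result.", "What do you mean?"):
        "The match where the program beat a professional player.",
    ("My code finally compiled.", "What do you mean?"):
        "I mean it built without errors for the first time.",
}

def respond(history):
    """Return the canned reply keyed on the last CONTEXT utterances."""
    key = tuple(history[-CONTEXT:])
    return canned.get(key, "Tell me more.")   # fallback when the key is missing

print(respond(["I saw the new Go result.", "What do you mean?"]))
print(respond(["My code finally compiled.", "What do you mean?"]))
```

Keyed on longer windows and backed by something the size of the web, this is the brute-force conversationalist Bitbucket has in mind; nothing in the mechanism amounts to understanding.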

Blandula I don’t think the Internet actually contains every set of a hundred sentences ever spoken during the history of the Universe.

Bitbucket No, granted; but it’s pretty good, and it’s growing rapidly, and it’s skewed towards the kind of thing that people actually say. I grant you that in practice there will always be unusual contextual clues that the Google chatbot won’t pick up, or will mishandle. But don’t forget that human beings miss the point sometimes, too.  It seems to me a realistic aspiration that the level of errors could fairly quickly be pushed down to human levels based on Internet content.

Blandula It would of course tell us nothing whatever about consciousness or the human mind; it would just be a trick. And a damaging one. If Google could fake human conversation, many people would ascribe consciousness to it, however unjustifiably. You know that quite poor, unsophisticated chatbots have been treated by naive users as serious conversational partners ever since Eliza, the grandmother of them all. The internet connection makes it worse, because a surprising number of people seem to think that the Internet itself might one day accidentally attain consciousness. A mad idea: so all those people working on AI get nowhere, but some piece of kit which is carefully designed to do something quite different just accidentally hits on the solution? It’s as though Jethro Tull had been working on his machine and concluded it would never be a practical seed-drill; but then realised he had inadvertently built a viable flying machine. Not going to happen. Thing is, believing some machine is a person when it isn’t is not a trivial matter, because you then naturally start to think of people as being no more than machines. It starts to seem natural to close people down when they cease to be useful, and to work them like slaves while they’re operative. I’m well aware that a trend in this direction is already established, but a successful chatbot would make things much, much worse.

Bitbucket Well, that’s a nice exposition of the paranoia which lies behind so many of your attitudes. Look, you can talk to automated answering services as it is: nobody gets het up about it, or starts to lose their concept of humanity.

Of course you’re right that a Google chatbot in itself is not conscious. But isn’t it a good step forward? You know that in the brain there are several areas that deal with speech; Broca’s area seems to put coherent sentences together while Wernicke’s area provides the right words and sense. People whose Wernicke’s area has been destroyed, but who still have a sound Broca’s area, apparently talk fluently and sort of convincingly, but without ever really making sense in terms of the world around them. I would claim that a working Google chatbot is in essence a Broca’s area for a future conscious AI. That’s all I’ll claim, just for the moment.