I was interested to see this Wired piece recently; specifically the points about how Google picks up contextual clues. I’ve heard before about how Google’s translation facilities basically use the huge database of the web: instead of applying grammatical rules or anything like that, they just find equivalents in parallel texts, or alternatives that people use when searching, and this allows them to do a surprisingly good – not perfect – job of picking up those contextual issues that are the bane of most translation software. At least, that’s my understanding of how it works. Somehow it hadn’t quite occurred to me before, but a similar approach lends itself to the construction of a pretty good chatbot: one that could finally pass the Turing Test unambiguously.
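To make the idea concrete, here is a toy sketch of that example-based approach: instead of grammar rules, translations are simply looked up in a phrase table mined from parallel texts, with longer phrases supplying the contextual disambiguation. This is only an illustration of the principle, not Google’s actual pipeline; the phrase table and its entries are invented stand-ins.

```python
# Tiny invented phrase table standing in for one mined from parallel texts.
# Longer entries capture context: "bank" translates differently depending
# on the surrounding words.
phrase_table = {
    "bank of the river": "rive du fleuve",
    "bank account": "compte bancaire",
    "the": "le",
}

def translate(text: str) -> str:
    """Greedy longest-phrase-first lookup, left to right (a toy sketch)."""
    words = text.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest candidate phrase starting at position i first.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in phrase_table:
                out.append(phrase_table[phrase])
                i = j
                break
        else:
            out.append(words[i])  # pass unknown words through unchanged
            i += 1
    return " ".join(out)
```

Because the table is consulted longest-match-first, “bank account” and “bank of the river” come out with different senses of “bank” without any grammatical analysis at all, which is the contextual trick the piece describes.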
Ah, the oft-promised passing of the Turing Test. Wake me up when it happens – we’ve been round this course so many times in the past.
Strangely enough, this does remind me of one of the things we used to argue about a lot in the past. You’ve always wanted to argue that computers couldn’t match human performance in certain respects in principle. As a last resort, I tried to get you to admit that in principle we could get a computer to hold a conversation with human-level responses just by the brutest of brute force solutions. You just can a perfect response for every possible sentence. When you get that sentence as input, you send the canned response as output. The longest sentence ever spoken is not infinitely long, and the number of sentences of any finite length is finite; so in principle we can do it.
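The brute-force scheme described above is, in essence, nothing more than a giant lookup table from sentences to responses. A minimal sketch, with a few invented placeholder entries standing in for the (in principle finite, in practice absurd) complete table:

```python
# In-principle brute force: one canned response per possible input
# sentence. These entries are illustrative placeholders; the "perfect"
# table would cover every sentence up to some finite length.
canned = {
    "hello": "Hello! How are you?",
    "how are you?": "Fine, thanks. And you?",
    "what time is it?": "I'm afraid I don't wear a watch.",
}

def respond(sentence: str) -> str:
    # Normalise the input and look it up; the complete table would
    # never miss, so the fallback exists only because this one is tiny.
    return canned.get(sentence.strip().lower(), "<no canned response>")
```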
I remember: what you could never grasp was that the meaning of a sentence depends on the context, so you can’t devise a perfect response for every sentence without knowing what conversation it was part of. What would the canned response be to ‘What do you mean?’, to take just one simple example?
What you could never grasp was that in principle we can build in the context, too. Instead of just taking one sentence, we can have a canned response to sequences of the last ten sentences if we like, or the last hundred sentences, or whatever it takes. Of course the resources required get absurd, but we’re talking about the principle, so we can assume whatever resources we want. The point I wanted to make is that by using the contents of the Internet and search enquiries, Google could implement a real-world brute-force solution of broadly this kind.
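The context-aware variant just widens the lookup key: instead of the latest sentence alone, the key is the whole window of the last N sentences. Here is a minimal sketch of that idea, again with invented placeholder entries; the back-off from longer to shorter contexts is one plausible way to handle a miss, not part of the original argument.

```python
from collections import deque

N = 3  # size of the rolling context window (ten, a hundred, whatever it takes)

# Invented placeholder entries: keys are tuples of recent sentences.
# Note the same question gets different responses in different contexts.
canned = {
    ("what do you mean?",): "I mean exactly what I said.",
    ("it rained all day.", "what do you mean?"):
        "I mean the rain never stopped.",
}

class BruteForceBot:
    def __init__(self):
        self.window = deque(maxlen=N)  # rolling context of recent sentences

    def respond(self, sentence: str) -> str:
        self.window.append(sentence.strip().lower())
        # Try the longest available context first, backing off to shorter ones.
        for start in range(len(self.window)):
            key = tuple(list(self.window)[start:])
            if key in canned:
                return canned[key]
        return "<no canned response>"
```

The resource cost is the whole point of the argument: the table grows exponentially with the window size, which is why this works only “in principle” until something like the web supplies the entries for free.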
I don’t think the Internet actually contains every sequence of a hundred sentences ever spoken during the history of the Universe.
No, granted; but it’s pretty good, and it’s growing rapidly, and it’s skewed towards the kind of thing that people actually say. I grant you that in practice there will always be unusual contextual clues that the Google chatbot won’t pick up, or will mishandle. But don’t forget that human beings miss the point sometimes, too. It seems to me a realistic aspiration that the level of errors could fairly quickly be pushed down to human levels based on Internet content.
It would of course tell us nothing whatever about consciousness or the human mind; it would just be a trick. And a damaging one. If Google could fake human conversation, many people would ascribe consciousness to it, however unjustifiably. You know that quite poor, unsophisticated chatbots have been treated by naive users as serious conversational partners ever since Eliza, the grandmother of them all. The Internet connection makes it worse, because a surprising number of people seem to think that the Internet itself might one day accidentally attain consciousness. A mad idea: so all those people working on AI get nowhere, but some piece of kit which is carefully designed to do something quite different just accidentally hits on the solution? It’s as though Jethro Tull had been working on his machine and concluded it would never be a practical seed-drill; but then realised he had inadvertently built a viable flying machine. Not going to happen. Thing is, believing some machine is a person when it isn’t is not a trivial matter, because you then naturally start to think of people as being no more than machines. It starts to seem natural to close people down when they cease to be useful, and to work them like slaves while they’re operative. I’m well aware that a trend in this direction is already established, but a successful chatbot would make things much, much worse.
Well, that’s a nice exposition of the paranoia which lies behind so many of your attitudes. Look, you can talk to automated answering services as it is: nobody gets het up about it, or starts to lose their concept of humanity.
Of course you’re right that a Google chatbot in itself is not conscious. But isn’t it a good step forward? You know that in the brain there are several areas that deal with speech; Broca’s area seems to put coherent sentences together while Wernicke’s area provides the right words and sense. People whose Wernicke’s area has been destroyed, but who still have a sound Broca’s area, apparently talk fluently and sort of convincingly, but without ever really making sense in terms of the world around them. I would claim that a working Google chatbot is in essence a Broca’s area for a future conscious AI. That’s all I’ll claim, just for the moment.