Chatbot fever

The Guardian recently featured an article produced by ‘GPT-3, OpenAI’s powerful new language generator’. It’s an essay intended to reassure us humans that AIs do not want to take over, still less kill all humans. GPT-3 also produced a kind of scripture for its own religion, as well as other relatively impressive texts. Its chief advantage, apparently, is that it draws on a vast corpus of text scraped from across the Internet, from which it works out what phrase or sentence might naturally come next in the piece it’s producing.
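To make that ‘what comes next’ idea concrete, here is a minimal sketch of next-token text generation, assuming Python with Hugging Face’s transformers library and the freely downloadable GPT-2 model standing in for GPT-3 (which is only reachable through OpenAI’s API). The prompt is an invented example chosen to echo the Guardian essay’s theme, not the one the newspaper actually used.

```python
# Minimal sketch: generating text by repeatedly predicting the next token.
# GPT-2 stands in here for GPT-3, which is not freely downloadable.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt, chosen only to echo the essay's theme.
prompt = "Humans have nothing to fear from artificial intelligence, because"

# The model extends the prompt one token at a time, each time picking a
# plausible continuation given everything written so far.
outputs = generator(
    prompt,
    max_new_tokens=40,
    num_return_sequences=3,
    do_sample=True,
)
for out in outputs:
    print(out["generated_text"], "\n")
```

Run a few times, the three continuations will differ wildly in quality, which is roughly the situation the Guardian’s editors found themselves in.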

Now I say the texts are impressive, but I think your admiration for the Guardian piece ebbs considerably when you learn that GPT-3 had eight runs at this; the best bits from all eight were then selected and edited together, not necessarily in the original order, by a human. It seems the AI essentially just trawled some raw material from the Internet, which a human editor then shaped into the finished piece. The trawling is still quite a feat, though you can safely bet that there were some weird things among the stuff the editor rejected. Overall, it seems GPT-3 is an excellent chatbot, but not really different in kind from its predecessors.

The thing is, for really human-style text, the bot needs to be able to deal with meaning, and none of them even attempt that. We don’t really have any idea of how to approach that challenge; it’s not that we haven’t made enough progress, rather that we’re not even on the road, and haven’t really got anywhere with finding it. What we have got surprisingly good at is making machines that fake meaningfulness, or manage to do without it. Once it would have seemed pretty unlikely that computer translation would ever be any good, because proper translation involves considering the meanings of words. Of course Google Translate is still highly fallible, but it’s good enough to be useful.

The real puzzle is why people are so eager to pretend that AIs are ready for human style conversations and prose composition. Is it just that so many of us would love a robot pal (I certainly would)? Or some deeper metaphysical loneliness? Is it a becoming but misplaced modesty about human capacities? Whatever it is, it seems to raise the risk that we’ll all end up talking to the nobody in the machine, like budgies chirping at their reflections. I suppose it must be acknowledged that we’ve all had conversations with humans where it seemed rather as if there was no-one at home in there.