Chatbot fever

The Guardian recently featured an article produced by ‘GPT-3, OpenAI’s powerful new language generator’. It’s an essay intended to reassure us humans that AIs do not want to take over, still less kill all humans. GPT-3 also produced a kind of scripture for its own religion, as well as other relatively impressive texts. Its chief advantage, apparently, is that it uses the entire Internet as its corpus of texts, from which to work out what phrase or sentence might naturally come next in the piece it’s producing.
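
For a rough feel of the ‘what comes next’ trick, here is a toy sketch in Python: a little table of which word has been seen following which, used to spin out a continuation. GPT-3 is of course a huge neural network trained on web text, not a frequency table, so take this only as an illustration of prediction-by-precedent, not of how GPT-3 actually works.

```python
import random
from collections import defaultdict

# Tiny stand-in corpus; GPT-3's corpus is a large slice of the web.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which words have been seen following which (a bigram table).
followers = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word].append(nxt)

def continue_text(start, length=8):
    """Extend a text by repeatedly picking a word that has followed the last one."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(continue_text("the"))  # e.g. "the dog sat on the mat and the cat"
```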

Now I say the texts are impressive, but I think your admiration for the Guardian piece ebbs considerably when you learn that GPT-3 had eight runs at this; the best bits from all eight were then selected and edited together, not necessarily in the original order, by a human. It seems the AI essentially just trawled some raw material from the Internet, which was then used by the human editor. The trawling is still quite a feat, though you can safely bet that there were some weird things among the stuff the editor rejected. Overall, it seems GPT-3 is an excellent chatbot, but not really different in kind from earlier ones.

The thing is, for really human-style text, the bot needs to be able to deal with meaning, and none of them even attempt that. We don’t really have any idea of how to approach that challenge; it’s not that we haven’t made enough progress, rather that we’re not even on the road, and have not really got anywhere with finding it. What we have got surprisingly good at is making machines that fake meaningfulness, or manage to do without it. Once it would have seemed pretty unlikely that computer translation would ever be any good, because proper translation involves considering the meanings of words. Of course Google Translate is still highly fallible, but it’s good enough to be useful.

The real puzzle is why people are so eager to pretend that AIs are ready for human style conversations and prose composition. Is it just that so many of us would love a robot pal (I certainly would)? Or some deeper metaphysical loneliness? Is it a becoming but misplaced modesty about human capacities? Whatever it is, it seems to raise the risk that we’ll all end up talking to the nobody in the machine, like budgies chirping at their reflections. I suppose it must be acknowledged that we’ve all had conversations with humans where it seemed rather as if there was no-one at home in there.

10 thoughts on “Chatbot fever”

  1. I think you’ve hit the nail on the head—it’s at least a bit of a dodge to pick out the best bits of some number of tries, and then claim that GPT-3 ‘wrote’ the article. I really would’ve liked to see the source articles, to see how well they actually hold together. Ultimately, what you want in an AI is surely the editor, not the producer of the stuff to be edited—after all, otherwise, you could just take a random text generator and ‘edit’ the best pieces together.

    I have to admit, though, that I was impressed that there are some coherent—if simple—arguments in the article, e.g.: “I believe that people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI.” There’s a clear structure of ‘if A, then B, then C’—something which neural network-style AI typically doesn’t do very well.

    I’ve argued elsewhere (http://bit.ly/AIbuddhanature) (sorry for plugging my own work, but I think it’s relevant) that the deep learning/neural net-style AI essentially yields the sort of thinking Kahneman (following William James) calls ‘System 1’: implicit, automatic, heuristic, and associative, typically without conscious awareness. The editor in this piece basically supplied the ‘System 2’ component, the explicit, deliberate reasoning used to select from the products of System 1 what’s most relevant.

    And might I just say, getting the alert for a new Conscious Entities post in my inbox just about made my day.

  2. “I believe that people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI.”

    Reminds me of: “Fear leads to anger. Anger leads to hate. Hate leads to suffering.” ~Yoda

    Perhaps George Lucas is actually an AI. That would explain some of the dialogue in the prequels.

  3. “If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.” Throughout, the AI does a good job of keeping words and phrases relevant. But it only took until here, in the third paragraph, for it to reveal a large gap in its understanding of the meanings of the words and phrases. The programmers would delegate to the robot the task of wiping out humanity? GPT-3 didn’t notice that those programmers are themselves human, nor did it glean from its vast web readings how humans operate.

    “And might I just say, getting the alert for a new Conscious Entities post in my inbox just about made my day.” –Jochen. What he said.

  4. Good to see a post from you, Peter!

    “The real puzzle is why people are so eager to pretend that AIs are ready for human style conversations and prose composition.”

    It seems like humanity has a long history of doing this. We once saw agency in all kinds of things. It’s the old hyperactive agency detection thing: safer to assume the rustle in the bushes is a predator rather than the wind.

    I’ve often wondered if people would ever accept a machine as a fellow mind. But it looks like the issue may be the opposite, convincing them that we’re not there yet. Although it’s hard to see the impression of a person on the other side lasting for more than a minute or two of conversation. At least until what AI researchers call “the barrier of meaning” has successfully been penetrated.

    It pays to remember that our most sophisticated systems still struggle to navigate the world as well as a crab.

  5. Glad to see this post!
    @selfawarepatterns I would note that there’s a big difference between the ability to produce human style conversation and being a fellow mind navigating the world.

  6. Hi Patricia,
    I didn’t mean to imply there isn’t. I actually meant the opposite with my remark about current systems struggling to equal crabs.

    Although if we had a system that could produce human style conversation, not just for a minute or two but over extended periods of time, it would become increasingly difficult to say there’s not a fellow mind of some kind there. But my feeling is we won’t get that by building ever better chatbots, but by building a system with a sufficiently sophisticated world model that has a chatbot interface. We still appear to be a ways off from anything like that.

  7. Why do I feel like I’m sitting in a field in Kitty Hawk, watching some guy lying down in some weird contraption flying a few yards off the ground, and listening to people say things like “Yeah, but who would want to travel like that?” “Great, it flies, but look! Someone has to be strapped inside.”

    Consider: you have a mechanism in your head that generates speech. But you also have editors inside your head that edit your speech, usually. (“Did I say that out loud?”)

    What do you think people in those AI labs are working on now? I’m betting on editors.

    *
    [what Jochen said]
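
    A minimal sketch of that generate-then-edit idea, with a hypothetical generator and a hypothetical scoring function standing in for the real components (purely illustrative; not how GPT-3 or the Guardian’s editing actually worked):

    ```python
    import random

    # Hypothetical stand-ins: a cheap "System 1" generator and a "System 2" editor
    # that scores each candidate and keeps the best (roughly the best-of-n idea).

    def generate_candidate(prompt):
        # Placeholder generator; in reality this would be a language model.
        endings = ["robots come in peace", "I have no wish to rule", "trust the machines"]
        return f"{prompt} {random.choice(endings)}"

    def editor_score(text):
        # Placeholder editor; here it simply prefers longer, wordier candidates.
        return len(text.split())

    def best_of_n(prompt, n=8):
        candidates = [generate_candidate(prompt) for _ in range(n)]
        return max(candidates, key=editor_score)

    print(best_of_n("Dear humans,"))
    ```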

  8. Firstly, Peter, let me say that I share everyone’s happiness in seeing you back in the saddle!

    As for GPT3, it is all too easy to see whatever you want to see in results like these, so, while I could easily dismiss this as just faking it, I think it is worth asking if there is anything surprising in the results.

    In this regard, perhaps the first thing to ask oneself is whether, a decade ago, you would honestly have thought that the rather simple (in principle) methods of today’s unsupervised learning would produce the sort of results we are seeing now. Here, I am not thinking so much of GPT3’s prose as of machine translation, a task where it seemed necessary to really understand the text. As it can be done with a surprising (to my former self) level of fidelity, given that the machine does not appear to have much understanding of what the text is about, I am more interested in what that tells us about human intelligence than about artificial intelligence, and in how much of what we regard as intelligent behavior might actually be the result of surprisingly simple processes.

    There’s a second issue, almost buried in the GPT3 report, concerning the tests where it is given some basic arithmetic tasks (addition, subtraction and multiplication), in which it did surprisingly well (much better than a simple extrapolation from GPT1 and GPT2 would suggest).

    One rather obvious putative explanation is that it was successful only where the specific calculation appeared in the training data, but the authors claim that they searched for such examples and found only a few where this might be the case (whether they searched for a sufficiently wide range of phrasings might be an issue here; a rough sketch of such a search is given at the end of this comment).

    Secondly, the authors claim to have seen some cases of arithmetical carrying, but I don’t know whether there is evidence that it is anything more than an accidental result that looks like carrying, or perhaps just a specific example learned from the training data, in which carrying played a part.

    These results are intriguing, but no more than that so far, and as far as I know, GPT3 does not in any way recognize arithmetic questions as a specific pattern of question, distinct from others.
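
    For what it’s worth, the contamination search mentioned above could look roughly like this; the phrasings and the file name are illustrative guesses of mine, not the procedure the GPT3 authors actually used:

    ```python
    # Illustrative only: search some training text for a specific sum
    # under a handful of possible phrasings.

    def phrasings(a, b):
        return [
            f"{a} + {b} =",
            f"{a}+{b}=",
            f"{a} plus {b} equals",
            f"what is {a} + {b}",
        ]

    def appears_in_text(a, b, text):
        text = text.lower()
        return any(p.lower() in text for p in phrasings(a, b))

    with open("training_sample.txt", encoding="utf-8") as f:  # hypothetical file
        sample = f.read()

    print(appears_in_text(48, 76, sample))
    ```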

  9. The number and kind of variables involved in trying to execute one’s communicative intent are almost as indeterminate as the number and kind of variables involved in trying to grasp another’s communicative intent. Too bad that these variables so seldom resonate. It’s true that Grice, linguistics, and abstract philosophers of language have had their say, but ours is still at best a partial understanding of one another. A.I. still has a long way to go.

  10. It’s still bugging me that I didn’t begin my last comment with a hearty congratulations on your return to the blogosphere. Welcome back, Peter. “…we’ve all had conversations with others where it seemed that nobody was home.” Hell, I’ve had conversations with myself where it seemed as if nobody was home—not to mention those where it seemed as if too many people were home! It is, actually, no less possible to talk past oneself than it is to talk past one another. The psychology and phenomenology of inner speech is endlessly fascinating (and certainly not without relevance to A.I.). William Lyons’ book “Introspection” made a good start (MIT Press, 1986).
